ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

Results 1901-2000 of 3801 (3696 ASCL, 105 submitted)

[ascl:1807.033] LSC: Supervised classification of time-series variable stars

LSC (LINEAR Supervised Classification) trains a number of classifiers, including random forest and K-nearest neighbor, to classify variable stars and compares the results to determine which classifier is most successful. Written in R, the package includes anomaly detection code for testing the application of the selected classifier to new data, thus enabling the creation of highly reliable data sets of classified variable stars.
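
A minimal sketch of this train-and-compare approach, written in Python with scikit-learn rather than in R (LSC's actual language), using synthetic features in place of real light-curve statistics:

```python
# Synthetic stand-in for LSC's approach: train several classifiers on
# light-curve features and compare their cross-validated accuracies.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows = stars, columns = light-curve
# features (period, amplitude, skewness, ...); y = variability classes.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```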

[ascl:1602.005] LRGS: Linear Regression by Gibbs Sampling

LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends the algorithm of Kelly (2007), which used Gibbs sampling to perform linear regression in fairly general cases, in two ways: it generalizes the procedure to multiple response variables, and it models the prior distribution of covariates using a Dirichlet process.

[ascl:1306.012] LRG DR7 Likelihood Software

This software computes likelihoods for the Luminous Red Galaxies (LRG) data from the Sloan Digital Sky Survey (SDSS). It includes a patch to the existing CAMB software (ascl:1102.026; the February 2009 release) to calculate the theoretical LRG halo power spectrum for various models. The code is written in Fortran 90 and has been tested with the Intel Fortran 90 and GFortran compilers.

[ascl:1902.002] LPNN: Limited Post-Newtonian N-body code for collisionless self-gravitating systems

The Limited Post-Newtonian N-body code (LPNN) simulates post-Newtonian interactions between a massive object and many low-mass objects. The interaction between the massive object and the low-mass objects is calculated by a post-Newtonian approximation, and the interaction between low-mass objects is calculated by Newtonian gravity. The code is based on the sticky9 code and can be accelerated with a GPU in a CUDA (version 4.2 or earlier) environment.

[ascl:2103.015] LPF: Real-time detection of transient sources in radio data streams

LPF (Live Pulse Finder) provides real-time automated analysis of the radio image data stream at multiple frequencies. The fully automated, GPU-based, machine-learning-backed pipeline performs source detection, association, flux measurement, and physical parameter inference. At the end of the pipeline, an alert of a significant detection of a transient event can be sent out and the data saved for further investigation.

[ascl:1501.007] LP-VIcode: La Plata Variational Indicators Code

LP-VIcode computes variational chaos indicators (CIs) quickly and easily. The following CIs are included:

  • Lyapunov Indicators, also known as Lyapunov Characteristic Exponents, Lyapunov Characteristic Numbers or Finite Time Lyapunov Characteristic Numbers (LIs)
  • Mean Exponential Growth factor of Nearby Orbits (MEGNO)
  • Slope Estimation of the largest Lyapunov Characteristic Exponent (SElLCE)
  • Smaller ALignment Index (SALI)
  • Generalized ALignment Index (GALI)
  • Fast Lyapunov Indicator (FLI)
  • Orthogonal Fast Lyapunov Indicator (OFLI)
  • Spectral Distance (SD)
  • dynamical Spectra of Stretching Numbers (SSNs)
  • Relative Lyapunov Indicator (RLI)

[ascl:1010.038] Low Resolution Spectral Templates For AGNs and Galaxies From 0.03 -- 30 microns

We present a set of low resolution empirical SED templates for AGNs and galaxies in the wavelength range from 0.03 to 30 microns, based on the multi-wavelength photometric observations of the NOAO Deep-Wide Field Survey Boötes field and the spectroscopic observations of the AGN and Galaxy Evolution Survey. Our training sample comprises 14448 galaxies in the redshift range 0<~z<~1 and 5347 likely AGNs in the range 0<~z<~5.58. We use our templates to determine photometric redshifts for galaxies and AGNs. While they are relatively accurate for galaxies, their accuracies for AGNs are a strong function of the luminosity ratio between the AGN and galaxy components. Somewhat surprisingly, the relative luminosities of the AGN and its host are well determined even when the photometric redshift is significantly in error. We also use our templates to study the mid-IR AGN selection criteria developed by Stern et al. (2005) and Lacy et al. (2004). We find that the Stern et al. (2005) criteria suffer from significant incompleteness when there is a strong host galaxy component and at z ~ 4.5, when the broad Halpha emission line is redshifted into the [3.6] band, but that they are little contaminated by low and intermediate redshift galaxies. The Lacy et al. (2004) criterion is not affected by incompleteness at z ~ 4.5 and is somewhat less affected by strong galaxy host components, but is heavily contaminated by low redshift star forming galaxies. Finally, we use our templates to predict the color-color distribution of sources in the upcoming WISE mission and define a color criterion to select AGNs analogous to those developed for IRAC photometry. We estimate that between 640,000 and 1,700,000 AGNs will be identified by these criteria, though the selection will have serious completeness problems for z >~ 3.4.

[ascl:2207.017] LOTUS: 1D Non-LTE stellar parameter determination via Equivalent Width method

LOTUS (non-LTE Optimization Tool Utilized for the derivation of atmospheric Stellar parameters) derives stellar parameters via the Equivalent Width (EW) method under the assumption of 1D non-local thermodynamic equilibrium. It is mainly applied to spectroscopic data from high-resolution spectral surveys, and can provide extremely accurate measurements of stellar parameters compared with non-spectroscopic analyses of benchmark stars. LOTUS provides a fast optimizer for obtaining stellar parameters based on a Differential Evolution algorithm, derives well-constrained uncertainties on stellar parameters via the slice-sampling MCMC of PyMC3 (ascl:1610.016), and can interpolate the curve of growth from a theoretical EW grid under the assumptions of LTE and non-LTE. It also visualizes the excitation and ionization balance at the optimal combination of stellar parameters.

[ascl:1308.002] LOSSCONE: Capture rates of stars by a supermassive black hole

LOSSCONE computes the rates of capture of stars by supermassive black holes. It uses stationary and time-dependent solutions of the Fokker-Planck equation describing the evolution of the distribution function of stars due to two-body relaxation, and works for arbitrary spherical and axisymmetric galactic models that are provided by the user in the form of M(r), the cumulative mass as a function of radius.

[ascl:1309.003] LOSP: Liège Orbital Solution Package

LOSP is a FORTRAN77 numerical package that computes the orbital parameters of spectroscopic binaries. The package deals with SB1 and SB2 systems and is able to adjust either circular or eccentric orbits through a weighted fit.

[ascl:2401.006] LoSoTo: LOFAR solutions tool

LoSoTo (LOFAR Solution Tool) performs a variety of operations on H5parm data, a format based on HDF5; it isolates direction-independent systematic effects, which can therefore be transferred to the target field. Subsets of data can be selected for each operation using lists of axis values, regular expressions, or intervals. The LoSoTo package stores solutions in arrays organized in a hierarchical fashion; this provides flexibility and preserves performance. The code can, for example, extract Faraday rotation from RR/LL phase solutions or a rotation matrix, clip solutions around the median, and calculate the ionospheric structure function. LoSoTo also includes an outlier flagging procedure, normalizes solutions to a given value, and offers an advanced plotting routine, among many other operations.

[ascl:1608.018] LORENE: Spectral methods differential equations solver

LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.

[ascl:2401.014] LoRD: Locate Reconnection Distribution

LoRD (Locate Reconnection Distribution) identifies the locations and structures of 3D magnetic reconnection within discrete magnetic field data. The toolkit contains three main functions: ARD (Analyze Reconnection Distribution), which locates the grid cells undergoing reconnection without null points and also recognizes the local configurations of reconnection sites; ANP (Analyze Null Points), which locates and classifies the 3D null points; and APNP (Analyze Projected Null Points), which analyzes the 2D neutral points projected on a plane near a cell. LoRD is written in Matlab and the toolkit contains demo scripts.

[ascl:2301.007] LoLLiPoP: Low-L Likelihood Polarized for Planck

LoLLiPoP is a Planck low-l polarization likelihood based on cross-power spectra, for which the bias is zero when the noise is uncorrelated between maps. It uses a modified approximation adapted to cross-power spectra and is interfaced with the Cobaya (ascl:1910.019) MCMC sampler. Cross-spectra are computed on the CMB maps produced by Commander component separation applied to each detset split of the Planck frequency maps.

[ascl:2104.030] lofti_gaiaDR2: Orbit fitting with Gaia astrometry

Lofti_gaia fits orbital parameters for one component of a wide stellar binary relative to the other when both objects are resolved in Gaia DR2. It takes as input only the Gaia DR2 source ids of the two components and their masses. It retrieves the relevant parameters from the Gaia archive, computes observational constraints from them, and fits orbital parameters to those measurements. It assumes the two components are bound in an elliptical orbit.

[submitted] LOFAR H5plot

Calibration solutions for the LOFAR radio telescope are stored in a 5-dimensional (time, frequency, station, polarisation and direction in the sky) HDF5 table. H5plot is a GUI application for interactive visual inspection of these calibration solutions.

[ascl:2004.001] Locus: Optimized differential photometry

Locus implements the Locus Algorithm, which maximizes the performance of differential photometry systems by optimizing the number and quality of reference stars in the field of view together with the target.

[submitted] loci: Smooth Cubic Multivariate Local Interpolations

loci is a shared library for interpolations in up to 4 dimensions. It is written in C and can be used from C/C++, Python, and other languages. In order to calculate the coefficients of the cubic polynomial, only local values are used: the data itself and all combinations of first-order derivatives, i.e., in 2D f_x, f_y and f_xy. This is in contrast to splines, where the coefficients are calculated not from derivatives but from non-local data, which can lead to over-smoothing of the result.

[ascl:1606.014] Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python

Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
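
A brief example of the Model interface described above (assuming a recent lmfit release):

```python
import numpy as np
from lmfit.models import GaussianModel

# Synthetic data: a noisy Gaussian line
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 201)
y = 3.0 * np.exp(-((x - 0.5) ** 2) / (2 * 0.8 ** 2)) + rng.normal(0, 0.1, x.size)

model = GaussianModel()
params = model.guess(y, x=x)        # initial Parameter objects, with bounds
result = model.fit(y, params, x=x)  # least-squares fit
print(result.fit_report())          # best-fit values and uncertainties
```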

[ascl:1706.005] LMC: Logarithmantic Monte Carlo

LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

[ascl:1906.020] LIZARD: Particle initial conditions for cosmological simulations

LIZARD (Lagrangian Initialization of Zeldovich Amplitudes for Resimulations of Displacements) creates particle initial conditions for cosmological simulations using the Zel'dovich approximation for the matter and velocity power spectrum.

[ascl:1906.011] Lizard: An extensible Cyclomatic Complexity Analyzer

Lizard is an extensible Cyclomatic Complexity Analyzer for imperative programming languages including C/C++/C#, Python, Java, and JavaScript. It counts the NLOC (lines of code without comments) and CCN (cyclomatic complexity number), and takes a token count of functions and a parameter count of functions. It also does copy-paste detection (code clone detection/code duplicate detection) and many other forms of static code analysis. Lizard is often used in software-related research; it calculates how complex the code looks rather than how complex it really is. Though it is often very hard to get all the included folders and files right when they are complicated, that accuracy is not needed to determine cyclomatic complexity, which is useful for measuring the maintainability of a software package.

[ascl:1902.005] LiveData: Data reduction pipeline

LiveData is a multibeam single-dish data reduction system for bandpass calibration and gridding. It is used for processing Parkes multibeam and Mopra data.

[ascl:1112.009] LISACode: A scientific simulator of LISA

LISACode is a simulator of the LISA mission. Its ambition is to achieve a new degree of sophistication, allowing it to map, as closely as possible, the impact of the different subsystems on the measurements. It is also a useful tool for generating realistic data including several kinds of sources (massive black hole binaries, EMRIs, cosmic string cusps, stochastic background, etc.) and for preparing their analysis. It is fully integrated into the Mock LISA Data Challenge. LISACode is not a detailed simulator at the engineering level but rather a tool whose purpose is to bridge the gap between the basic principles of LISA and a future, sophisticated end-to-end simulator.

[ascl:2205.017] LiSA: LIghtweight Source finding Algorithms for analysis of HI spectral data

The LIghtweight Source finding Algorithms (LiSA) library finds HI sources in next generation radio surveys. LiSA can analyze input data cubes of any size with pipelines that automatically decompose data into different domains for parallel distributed analysis. For source finding, the library contains python modules for wavelet denoising of 3D spatial and spectral data, and robust automatic source finding using null-hypothesis testing. The source-finding algorithms all have options to automatically choose parameters, minimizing the need for manual fine tuning. Finally, LiSA also contains neural network architectures for classification and characterization of 3D spectral data.

[ascl:1601.007] LIRA: Low-counts Image Reconstruction and Analysis

LIRA (Low-counts Image Reconstruction and Analysis) deconvolves any unknown sky components, provides a fully Poisson 'goodness-of-fit' for any best-fit model, and quantifies uncertainties on the existence and shape of unknown sky. It does this without resorting to χ² or rebinning, which can lose high-resolution information. It is written in R and requires the FITSio package.

[ascl:1602.006] LIRA: LInear Regression in Astronomy

LIRA (LInear Regression in Astronomy) performs Bayesian linear regression that accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables), time evolution of slopes, normalization and scatters, Malmquist and Eddington bias, and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS (ascl:1209.002) library.

[ascl:2412.029] lintsampler: Efficient random sampling via linear interpolation

lintsampler performs linear interpolant sampling to create a set of sample points from a density function. The code uses the evaluation of the density at the two endpoints of a 1D interval, the four corners of a 2D rectangle, or generally the 2^k vertices of a k-dimensional hyperbox (or a series of such hyperboxes, e.g., the cells of a k-dimensional grid) to draw random samples within the hyperbox. lintsampler works by evaluating a given PDF on the nodes of a grid (or grid-like structure, such as a tree); the number of evaluations (and memory occupancy) grows exponentially with the number of dimensions.
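
The 1D building block of this technique can be sketched directly; the following illustration (not the lintsampler API itself) inverts the CDF of a density that varies linearly across an interval:

```python
import numpy as np

def sample_linear_cell(a, b, fa, fb, rng, n=1):
    """Draw n samples from a density that varies linearly from f(a)=fa
    to f(b)=fb on [a, b] (fa, fb >= 0), by analytic CDF inversion.
    lintsampler generalizes this building block to k-D hyperboxes."""
    u = rng.random(n)
    if np.isclose(fa, fb):
        t = u  # constant density: uniform within the cell
    else:
        # solve (fb-fa)/2 * t**2 + fa*t = u*(fa+fb)/2 for t in [0, 1]
        t = (-fa + np.sqrt(fa**2 + u * (fb - fa) * (fa + fb))) / (fb - fa)
    return a + t * (b - a)

rng = np.random.default_rng(1)
samples = sample_linear_cell(0.0, 2.0, fa=1.0, fb=3.0, rng=rng, n=100000)
# The sample histogram should rise linearly from x=0 to x=2.
```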

[ascl:1504.019] LineProf: Line Profile Indicators

LineProf implements a series of line-profile analysis indicators and evaluates their correlation with RV data. It receives as input a list of Cross-Correlation Functions and an optional list of associated RVs. It evaluates the line profile according to the indicators and compares it with the computed RV if no associated RV is provided, or with the provided RV otherwise.

[ascl:2104.027] linemake: Line list generator

linemake generates formatted and curated atomic and molecular line lists suitable for spectral synthesis work. It is lightweight and easy to use. The code requires that the requested beginning and ending wavelengths not bridge the divide between two files of atomic line data; in such cases, run the code twice, once on either side of the divide, to generate the desired lists.

[ascl:2007.012] Line-Stacker: Spectral lines stacking

Line-Stacker stacks both 3D cubes and already extracted spectra and is an extension of Stacker (ascl:1912.019). It is an ensemble of both CASA tasks and native Python tasks. Line-Stacker supports image stacking, and some additional tools allowing further analysis of the stack product are also included in the module.

[ascl:2303.002] line_selections: Automatic line detection for large spectroscopic surveys

The Python code line_selections reads synthetic "full" spectra and elemental spectra, automatically identifies the detectable lines at a given resolution (given the linelist used to compute the spectra), and returns a table containing various properties of the lines (e.g., purity, central wavelength, and depth). The code stores this information in a pandas DataFrame. line_selections demonstrates where chemical information is present in a stellar spectrum, and allows the user to optimize observational strategies, such as the choice of resolution and spectral windows, as well as analysis codes through the application of high-quality masks.

[ascl:2307.042] LIMpy: Line Intensity Mapping in Python

LIMpy models and analyzes multi-line intensity maps of CII (158 µm), OIII (88 µm), and CO (1-0) to CO (13-12) transitions. It can be used as an analytic model for star formation rate, to simulate line intensity maps based on halo catalogs, and to calculate the power spectrum from simulated maps and the cross-correlated signal between two separate lines. Among other things, LIMpy can also create multi-line luminosity models and determine the multi-line intensity power spectrum.

[ascl:1710.023] LIMEPY: Lowered Isothermal Model Explorer in PYthon

LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation for the given input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models, normalized DF values, density and velocity moments, and projected properties, and it generates discrete samples.

[ascl:1107.012] LIME: Flexible, Non-LTE Line Excitation and Radiation Transfer Method for Millimeter and Far-infrared Wavelengths

LIME solves the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicts emergent spectra. The code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons. Various physical models can be used as input, ranging from analytical descriptions through tabulated models to SPH simulations. To generate the Delaunay grid we sample the input model randomly, but weigh the sample probability with the molecular density and other parameters, thereby obtaining an average grid point separation that scales with the local opacity. Slow convergence of opaque models thus becomes tractable: when convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can readily be compared to observations. LIME is particularly well suited for modeling ALMA data because of the high dynamic range in scales that can be resolved using this type of grid, and can furthermore deal with overlapping lines of multiple molecular and atomic species.

[ascl:2312.017] LimberJack.jl: Auto-differentiable methods for cosmology

LimberJack.jl performs cosmological analyses of two-point auto- and cross-correlation measurements from galaxy clustering, CMB lensing, and weak lensing data. Written in Julia, it obtains gradients for its outputs faster than traditional finite difference methods, making the code greatly synergistic with gradient-based sampling methods such as Hamiltonian Monte Carlo. LimberJack.jl can efficiently explore parameter spaces with hundreds of dimensions.

[ascl:1906.007] limb-darkening: Limb-darkening coefficients generator

Limb-darkening generates limb-darkening coefficients from ATLAS and PHOENIX model atmospheres using arbitrary response functions. The code uses PyFITS (ascl:1207.009) and has several other dependencies, and produces a folder of results with descriptions of the columns contained in each file.

[ascl:1711.009] Lightning: SED Fitting Package

Lightning is a spectral energy distribution (SED) fitting procedure that quickly and reliably recovers star formation history (SFH) and extinction parameters. The SFH is modeled as discrete steps in time. The code consists of a fully vectorized inversion algorithm to determine SFH step intensities and combines this with a grid-based approach to determine three extinction parameters.

[ascl:1812.013] Lightkurve: Kepler and TESS time series analysis in Python

Lightkurve analyzes astronomical flux time series data, in particular the pixels and light curves obtained by NASA’s Kepler, K2, and TESS exoplanet missions. This community-developed Python package is designed to be user friendly to lower the barrier for students, astronomers, and citizen scientists interested in analyzing data from these missions. Lightkurve provides easy tools to download, inspect, and analyze time series data and its documentation is supported by a large syllabus of tutorials.
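
A typical short workflow (assuming Lightkurve 2.x and network access to the MAST archive; the folding period is Kepler-10b's ~0.84 d):

```python
import lightkurve as lk

# Find Kepler long-cadence light curves of Kepler-10 and download one
search = lk.search_lightcurve("Kepler-10", mission="Kepler", cadence="long")
lc = search[0].download()

# Clean, detrend, and fold on the known ~0.837 d period of Kepler-10b
folded = (lc.remove_nans()
            .remove_outliers()
            .flatten(window_length=401)
            .fold(period=0.837))
folded.plot()
```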

[ascl:1408.012] LightcurveMC: An extensible lightcurve simulation program

LightcurveMC is a versatile and easily extended simulation suite for testing the performance of time series analysis tools under controlled conditions. It is designed to be highly modular, allowing new lightcurve types or new analysis tools to be introduced without excessive development overhead. The statistical tools are completely agnostic to how the lightcurve data is generated, and the lightcurve generators are completely agnostic to how the data will be analyzed. The use of fixed random seeds throughout guarantees that the program generates consistent results from run to run.

LightcurveMC can generate periodic light curves having a variety of shapes and stochastic light curves having a variety of correlation properties. It features two error models (Gaussian measurement noise and signal injection using a randomized sample of base light curves), testing of the C1 shape statistic, periodograms, ΔmΔt plots, autocorrelation function plots, peak-finding plots, and Gaussian process regression. The code is written in C++ and R.

[ascl:1403.004] Lightcone: Light-cone generating script

Lightcone works with simulated galaxy data stored in a relational database to rearrange the data into the shape of a light-cone; the simulated galaxy data is expected to be in a box volume. The light-cone constructing script works with output from the SAGE semi-analytic model (ascl:1601.006), but will work with any other model that saves galaxy positions (and other properties) per snapshot of the simulation volume distributed in time. The database configuration file is set up for the PostgreSQL RDBMS, but can be modified for use with any other SQL database.

[ascl:2102.006] Lightbeam: Simulate light through weakly-guiding waveguides

Lightbeam simulates the 3D propagation of light through waveguides of arbitrary geometries. The package is based on the finite-differences beam propagation method and employs a transverse adaptive mesh for extra computational efficiency. Also included are tools to simulate adaptive optics systems for use in conjunction with waveguides, useful in astronomical contexts for simulating coupling devices that transfer telescope light to the science instrument.

[ascl:2107.001] light-curve: Light curve analysis toolbox

light-curve implements the extraction of numerous light curve features suitable for processing alert and archival data for the current ZTF and future Vera Rubin Observatory LSST photometric surveys. These high-performance irregular time series processing tools are written in Rust and Python.

[ascl:2012.008] LIFELINE: LIne proFiles in massivE coLliding wInd biNariEs

LIFELINE (LIne proFiles in massivE coLliding wInd biNariEs) simulates X-ray line profiles in colliding wind binaries. The code is self-consistent and computes the distribution of the wind velocity, the characterization of the wind shock region, and the line profile. In addition to performing the overall computation, LIFELINE can use a pre-computed velocity distribution to compute the shock characteristics and the line profile, or use pre-computed shock characteristics and velocity distributions to compute only the line profile.

[ascl:2209.018] libTheSky: Compute positions of celestial bodies and events

libTheSky computes the positions of celestial bodies, such as the Moon, planets, and stars, and events, including conjunctions and eclipses, with great accuracy. Written in Fortran, libTheSky can use different reference frames (heliocentric, geocentric, topocentric) and coordinate systems (ecliptic, equatorial, galactic; spherical, rectangular), and the user can choose low- or high-accuracy calculations, depending on need.

[ascl:2002.017] libstempo: Python wrapper for Tempo2

libstempo uses the Tempo2 library (ascl:1210.015) to load a pulsar's tim/par files, providing Python access to the TOAs, the residuals, the timing-model parameters, the fit procedure, and more.
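
A minimal usage sketch (file names are placeholders for real tempo2 par/tim files):

```python
import libstempo

# Load a pulsar from tempo2-format ephemeris (.par) and TOA (.tim) files
psr = libstempo.tempopulsar(parfile="pulsar.par", timfile="pulsar.tim")

toas = psr.toas()        # site arrival times (MJD)
res = psr.residuals()    # timing residuals (seconds)
psr.fit()                # run the tempo2 fit for the free parameters
```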

[ascl:1402.033] libsharp: Library for spherical harmonic transforms

Libsharp is a collection of algorithms for efficient conversion between maps on the sphere and their spherical harmonic coefficients. It supports a wide range of pixelisations (including HEALPix, GLESP, and ECP). This library is a successor of libpsht (ascl:1010.020); it adds MPI support for distributed memory systems and SHTs of fields with arbitrary spin, and also supports new developments in CPU instruction sets like the Advanced Vector Extensions (AVX) or fused multiply-accumulate (FMA) instructions. libsharp is written in portable C99; it provides an interface accessible to other programming languages such as C++, Fortran, and Python.
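
The map-to-a_lm conversion that libsharp implements can be illustrated through healpy's high-level interface (shown here only to demonstrate the transform pair, not libsharp's own C API):

```python
import numpy as np
import healpy as hp

nside, lmax = 64, 128
npix = hp.nside2npix(nside)
m = np.random.standard_normal(npix)   # toy scalar map on the sphere

alm = hp.map2alm(m, lmax=lmax)        # analysis: map -> a_lm
m_back = hp.alm2map(alm, nside)       # synthesis: a_lm -> map
```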

[ascl:2104.002] Librarian: The HERA Librarian

The HERA Librarian system keeps track of all the primary data products for the telescope at a given site. The Librarian supports large data volumes and automated data processing capabilities. A web-based application handles human user and automatic requests and interfaces with a backing database and data storage servers. The system supports the long-term data storage of all relevant telescope data, as well as staging data to individual users' directories for processing.

[ascl:1010.020] Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community.

The library is written in strictly standard-conforming C90, ensuring portability to many different hard- and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc.

Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2.

Development on this project has ended; its successor is libsharp (ascl:1402.033).

[ascl:1612.003] libprofit: Image creation from luminosity profiles

libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).

[ascl:1604.002] libpolycomp: Compression/decompression library

Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.

[ascl:1502.016] libnova: Celestial mechanics, astrometry and astrodynamics library

libnova is a general purpose, double precision, celestial mechanics, astrometry and astrodynamics library. Among many other calculations, it can calculate aberration, apparent position, proper motion, planetary positions, orbit velocities and lengths, angular separation of bodies, and hyperbolic motion of bodies.

[ascl:1206.009] Libimf

Libimf provides a collection of programming functions based on the general IMF-algorithm by Pflamm-Altenburg & Kroupa (2006).

[ascl:1408.002] LIA: LWS Interactive Analysis

The Long Wavelength Spectrometer (LWS) was one of two complementary spectrometers on the Infrared Space Observatory (ISO). LIA (LWS Interactive Analysis) is used for processing data from the LWS. It provides access to the different processing steps, including visualization of intermediate products and interactive manipulation of the data at each stage.

[ascl:1712.016] LgrbWorldModel: Long-duration Gamma-Ray Burst World Model

LgrbWorldModel is written in Fortran 90 and attempts to model the population distribution of the long-duration class of Gamma-Ray Bursts (LGRBs) as detected by NASA's now-defunct Burst And Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO). It is assumed that the population distribution of LGRBs is well fit by a multivariate log-normal distribution. The best-fit parameters of the distribution are found by maximizing the likelihood of the data observed by the BATSE detectors via a native built-in Adaptive Metropolis-Hastings Markov-Chain Monte Carlo (AMH-MCMC) sampler.

[ascl:1710.016] LGMCA: Local-Generalized Morphological Component Analysis

LGMCA (Local-Generalized Morphological Component Analysis) is an extension to GMCA (ascl:1710.015). Similarly to GMCA, it is a Blind Source Separation method which enforces sparsity. The novel aspect of LGMCA, however, is that the mixing matrix changes across pixels allowing LGMCA to deal with emissions sources which vary spatially. These IDL scripts compute the CMB map from WMAP and Planck data; running LGMCA on the WMAP9 temperature products requires the main script and a selection of mandatory files, algorithm parameters and map parameters.

[ascl:1804.023] LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts

LFsGRB models the luminosity function (LF) of short Gamma Ray Bursts (sGRBs) by using the available catalog data of all sGRBs detected till 2017 October, estimating their luminosities via pseudo-redshifts obtained from the Yonetoku correlation, and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well by both exponential cutoff power-law and broken power-law models. Using the derived parameters of these models along with conservative values of the jet opening angles seen from afterglow observations, the true rate of short GRBs is derived. Assuming that a short GRB is produced from each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks.

[ascl:1804.024] LFlGRB: Luminosity function of long gamma-ray bursts

LFlGRB models the luminosity function (LF) of long Gamma Ray Bursts (lGRBs) by using a sample of Swift and Fermi lGRBs to re-derive the parameters of the Yonetoku correlation and self-consistently estimate pseudo-redshifts of all the bursts with unknown redshifts. The GRB formation rate is modeled as the product of the cosmic star formation rate and a GRB formation efficiency for a given stellar mass.

[ascl:1711.018] LExTeS: Link Extraction and Testing Suite

LExTeS (Link Extraction and Testing Suite) extracts hyperlinks from PDF documents, tests the extracted links to see which are broken, and tabulates the results. Though written to support a particular set of PDF documents, the dataset and scripts can be edited for use on other documents.

[ascl:2208.009] LeXInt: Leja Exponential Integrators

LeXInt (Leja interpolation for eXponential Integrators) is a temporal exponential integration package using the method of polynomial interpolation at Leja points. Exponential Rosenbrock (EXPRB) and Exponential Propagation Iterative Runge-Kutta (EPIRK) methods use the Leja interpolation method to compute the φ functions that arise in exponential integrators. For linear PDEs, one can get the exact solution (in time) by directly computing the matrix exponential.

[ascl:2503.025] LESSPayne: Labeling Echelle Spectra with SMHR and Payne

LESSPayne performs semi-automatic analysis of echelle spectra of stars. It uses a neural network emulator to do a full spectrum fit to estimate stellar parameters, and performs automatic continuum normalization and equivalent width fits with theoretical masks. The code uses MOOG (ascl:1202.009) for spectrum synthesis fitting, ATLAS model atmosphere interpolation, and equivalent width abundance determination. LESSPayne can also perform automatic abundance uncertainty analysis with error propagation and summary tables, and should be viewed as providing a high-quality initialization for an SMHR file that reduces the time for a standard analysis.

[ascl:2503.040] LeR: Gravitational waves lensing rate calculator

LeR calculates detectable rates of gravitational wave events, both lensed and unlensed. Written in Python, it performs statistical simulation and forecasting of gravitational wave (GW) events and their rates. The code samples gravitational wave source properties, lens galaxy attributes, and source redshifts, and can generate image properties such as source position, magnification, and time delay. The package also calculates detectable merger rates per year. Key features of LeR include efficient sampling, optimized SNR calculations, and systematic archiving of results. LeR is tailored to support both GW population study groups and GW lensing research groups by providing a comprehensive suite of tools for GW event analysis.

[ascl:1108.009] LePHARE: Photometric Analysis for Redshift Estimate

LePHARE is a set of Fortran commands to compute photometric redshifts and to perform SED fitting. The latest version includes new features with FIR fitting and a more complete treatment of physical parameters and uncertainties based on PÉGASE and Bruzual & Charlot population synthesis models. The program is based on a simple chi2 fitting method between the theoretical and observed photometric catalogues. A simulation program is also available to generate realistic multi-colour catalogues taking into account observational effects.

[ascl:2404.026] LEO-vetter: Automated vetting for TESS planet candidates

LEO-vetter automatically vets transit signals found in light curve data. Inspired by the Kepler Robovetter (ascl:2012.006), LEO-vetter computes vetting metrics to be compared to a series of pass-fail thresholds. If a signal passes all tests, it is considered a planet candidate (PC). If a signal fails at least one test, it may be either an astrophysical false positive (FP; e.g., eclipsing binary, nearby eclipsing signal) or false alarm (FA; e.g., systematic, stellar variability). Pass-fail thresholds can be changed to suit individual research purposes, and LEO-vetter produces vetting reports for manual inspection of signals. Flux-level vetting can be applied to any light curve dataset (such as Kepler, K2, and TESS), including light curves with mixes of cadences, while pixel-level vetting has been implemented for TESS.

[ascl:1910.011] LEO-Py: Likelihood Estimation of Observational data with Python

LEO-Py uses a novel technique to compute the likelihood function for data sets with uncertain, missing, censored, and correlated values. It uses Gaussian copulas to decouple the correlation structure of variables and their marginal distributions to compute likelihood functions, thus mitigating inconsistent parameter estimates and accounting for non-normal distributions in variables of interest or their errors.

[ascl:1307.005] LENSVIEW: Resolved gravitational lens images modeling

Lensview models resolved gravitational lens systems based on LensMEM but using the Skilling & Bryan MEM algorithm. Though its primary purpose is to find statistically acceptable lens models for lensed images and to reconstruct the surface brightness profile of the source, LENSVIEW can also be used for more simple tasks such as projecting a given source through a lens model to generate a “true” image by conserving surface brightness. The user can specify complicated lens models based on one or more components, such as softened isothermal ellipsoids, point masses, exponential discs, and external shears; LENSVIEW generates a best-fitting source matching the observed data for each specific combination of model parameters.

[ascl:1804.012] Lenstronomy: Multi-purpose gravitational lens modeling software package

Lenstronomy is a multi-purpose open-source gravitational lens modeling Python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling, and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that can be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.

[ascl:1602.009] LensTools: Weak Lensing computing tools

LensTools implements a wide range of routines frequently used in Weak Gravitational Lensing, including tools for image analysis, statistical processing and numerical theory predictions. The package offers many useful features, including complete flexibility and easy customization of input/output formats; efficient measurements of power spectrum, PDF, Minkowski functionals and peak counts of convergence maps; survey masks; artificial noise generation engines; easy to compute parameter statistical inferences; ray tracing simulations; and many others. It requires standard numpy and scipy, and depending on tools used, may require Astropy (ascl:1304.002), emcee (ascl:1303.002), matplotlib, and mpi4py.

[ascl:1102.004] LENSTOOL: A Gravitational Lensing Software for Modeling Mass Distribution of Galaxies and Clusters (strong and weak regime)

We describe a procedure for modelling strong lensing galaxy clusters with parametric methods, and for ranking models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte-Carlo (MCMC) sampler ('Bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1-5% error according to the considered lensing configuration. However, we find that the mass of the galaxies is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Finally, we describe how to rank models quantitatively using the Bayesian evidence. We confirm the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters in this way. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology.

[ascl:1905.017] LensQuEst: CMB Lensing QUadratic Estimator

LensQuEst forecasts the signal-to-noise of CMB lensing estimators (standard, shear-only, magnification-only), generates mock maps, lenses them, and applies various lensing estimators to them. It can manipulate flat sky maps in various ways, including FFT, filtering, power spectrum, generating Gaussian random field, and applying lensing to a map, and evaluate these estimators on flat sky maps.

[ascl:2010.010] lenspyx: Curved-sky python lensed CMB maps simulation package

lenspyx creates curved-sky Python lensed CMB map simulations; the software allows those familiar with healpy (ascl:2008.022) to easily build lensed CMB simulations. Parallelization is done with OpenMP. The numerical cost is approximately that of a high-resolution harmonic transform. lenspyx provides two methods to build a simulation: one computes a deflected spin-0 HEALPix map from its alm and deflection field alm, and the other computes a deflected spin-weight HEALPix map from its gradient and curl modes and deflection field alm. lenspyx can be used in conjunction with the Planck 2018 CMB lensing pipeline plancklens (ascl:2010.009) to reproduce the published map and band-powers.

[ascl:1705.009] LensPop: Galaxy-galaxy strong lensing population simulation

LensPop simulates observations of the galaxy-galaxy strong lensing population in the Dark Energy Survey (DES), the Large Synoptic Survey Telescope (LSST), and Euclid surveys.

[ascl:1102.025] LensPix: Fast MPI full sky transforms for HEALPix

Modelling of the weak lensing of the CMB will be crucial to obtain correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools. A fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full sky simulation and subsequent parameter estimation at Planck-like sensitivity.

[ascl:1010.050] LensPerfect: Gravitational Lens Massmap Reconstructions Yielding Exact Reproduction of All Multiple Images

LensPerfect is a new approach to the massmap reconstruction of strong gravitational lenses. Conventional methods iterate over possible lens models which reproduce the observed multiple image positions well but not exactly. LensPerfect only produces solutions which fit all of the data exactly. Magnifications and shears of the multiple images can also be perfectly constrained to match observations.

[ascl:9903.001] LENSKY: Galactic Microlensing Probability

Given a model for the Galaxy, this program computes the microlensing rate in any direction. Program features include the ability to include the brightness of the lens and to compute the probability of lens detection at any level of lensing amplification. The program limits itself to lensing by single stars of single sources. The program is currently setup to accept input from the Galactic models of Bahcall and Soniera (1982, 1986).

There are three files needed for LENSKY, the Fortran file lensky.for and two input files: galmod.dsk (15 Megs) and galmod.sph (22 Megs). The zip file available below contains all three files. The program generates output to the file lensky.out. The program is pretty self-explanatory past that.

[ascl:2410.010] lensitbiases: rFFT-based flat-sky CMB lensing tools

lensitbiases provides rFFT-based N1 lensing bias calculations and tests. It is tuned for TT, P-only, or MV (GMV) quadratic estimators. It performs rFFT-based N1 and N1 matrix calculations in ~O(ms) time per lensing multipole for a Planck-like configuration, which allows on-the-fly evaluation of the bias. The N1 calculation requires 5 rFFTs of moderate size per L for TT, 20 for PP, and 45 for MV or GMV. lensitbiases is not particularly efficient at low lensing L, since in this case one must use large boxes.

[ascl:2404.008] LensIt: CMB lensing delensing tools

LensIt enables CMB lensing and CMB delensing using the flat-sky approximation. The package can find the maximum posterior estimate of CMB lensing deflection maps from temperature and/or polarization maps, and can perform Wiener filtering of masked CMB data, allowing for inhomogeneous noise and including lensing deflections, using a multigrid preconditioner. It contains fast and accurate simulation libraries for lensed CMB skies, and standard quadratic estimator lensing reconstruction tools. LensIt also includes CMB internal delensing tools, including internal delensing bias calculations for temperature and/or polarization maps.

[ascl:2102.021] lensingGW: Lensing of gravitational waves

lensingGW simulates lensed gravitational waves in ground-based interferometers from arbitrary compact binaries and lens models. Its algorithm resolves strongly lensed images and microimages simultaneously, such as the images resulting from hundreds of microlenses embedded in galaxies and galaxy clusters. It is based on Lenstronomy (ascl:1804.012).

[ascl:2210.027] LensingETC: Lensing Exposure Time Calculator

LensingETC optimizes observing strategies for multi-filter imaging campaigns of galaxy-scale strong lensing systems. It uses the lens modelling software lenstronomy (ascl:1804.012) to simulate and model mock imaging data, forecasts the lens model parameter uncertainties, and optimizes observing strategies.

[ascl:2406.005] Lenser: Measure weak gravitational flexion

Lenser estimates weak gravitational lensing signals, particularly flexion, from real survey data or realistically simulated images. Lenser employs a hybrid of image moment analysis and an Analytic Image Modeling (AIM) analysis. In addition to extracting flexion measurements by fitting a (modified Sérsic) model to a single image of a galaxy, Lenser can do multi-band, multi-epoch fitting. In multi-band mode, Lenser fits a single model to multiple postage stamps, each representing an exposure of a single galaxy in a particular band.

[ascl:1308.004] LensEnt2: Maximum-entropy weak lens reconstruction

LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

[ascl:1505.026] Lensed: Forward parametric modelling of strong lenses

Lensed performs forward parametric modelling of strong lenses. Using a provided model, Lensed renders the expected image of the lensing event for a large number of parameter settings, thereby exploring the space of possible realizations of the observation. It compares the expectation to the observed image by calculating the likelihood that the observation was indeed produced by the assumed model, thus reconstructing the probability distribution over the parameter space of the model. Written in C, the code uses a massively parallel ray-tracing kernel to perform the necessary calculations on a graphics processing unit (GPU), making the precise rendering of the background lensed sources fast and allowing the simultaneous optimization of tens of parameters for the selected model.

[ascl:1905.016] LensCNN: Gravitational lens detector

The LensCNN (Convolutional Neural Network) identifies images containing gravitational lensing systems after being trained and tested on simulated images, recovering most systems that are identifiable by eye.

[ascl:2106.014] Lemon: Linear integral Equations' Monte carlo solver based On the Neumann solution

Lemon solves radiative transfer (RT) processes that contain scattering. These processes are described by integro-differential equations with given initial or boundary conditions; Lemon solves these integro-differential equations, which can be converted into Fredholm integral equations of the second kind. The code then obtains the Neumann solution (a series that consists of infinite terms of multiple integrals) from the Fredholm integral equation, and uses the Monte Carlo (MC) method to evaluate these integrals. Lemon is written in Fortran; IDL programs are included for plotting the results.

[ascl:1809.001] LEMON: Differential photometry pipeline

LEMON is a differential-photometry pipeline, written in Python, that determines the changes in the brightness of astronomical objects over time and compiles their measurements into light curves. This code makes it possible to completely reduce thousands of FITS images of time series in a matter of only a few hours, requiring minimal user interaction.

[ascl:2406.020] LeHaMoC: Leptonic-Hadronic Modeling Code for high-energy astrophysical sources

LeHaMoC simulates high-energy astrophysical sources. It simulates the behavior of relativistic pairs, protons interacting with magnetic fields, and photons in a spherical region. The package contains numerous physical processes, including synchrotron emission and self-absorption, inverse Compton scattering, photon-photon pair production, and adiabatic losses. It also includes proton-photon pion production, proton-photon (Bethe-Heitler) pair production, and proton-proton collisions. LeHaMoC can model expanding spherical sources with a variable magnetic field strength. In addition, three types of external radiation fields can be defined: grey body or black body, power-law, and tabulated.

[ascl:2111.007] LEGWORK: LISA Evolution and Gravitational Wave ORbit Kit

LEGWORK (LISA Evolution and Gravitational Wave ORbit Kit) is a simple package for gravitational wave calculations. It evolves binaries and computes signal-to-noise ratios for binary systems potentially observable with LISA; it also visualizes the results. LEGWORK can also compare different detector sensitivity curves, compute the horizon distance for a collection of sources, and track signal-to-noise evolution over time.
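
A short sketch of the documented Source interface (illustrative binary parameters):

```python
import astropy.units as u
from legwork.source import Source

# Two illustrative double-white-dwarf binaries
sources = Source(m_1=[0.6, 0.8] * u.Msun,
                 m_2=[0.3, 0.5] * u.Msun,
                 ecc=[0.0, 0.1],
                 dist=[8.0, 12.0] * u.kpc,
                 f_orb=[2e-3, 5e-3] * u.Hz)

snr = sources.get_snr()   # LISA signal-to-noise ratio for each binary
```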

[ascl:2010.013] Legolas: Large Eigensystem Generator for One-dimensional pLASmas

Legolas (Large Eigensystem Generator for One-dimensional pLASmas) is a finite element code for MHD spectroscopy of 1D Cartesian/cylindrical equilibria with flow that balance pressure gradients, enriched with various non-adiabatic effects. The code's capabilities range from full spectrum calculations to eigenfunctions of specific modes to full-on parametric studies of various equilibrium configurations in different geometries.

[ascl:2204.003] legacystamps: Retrieve DESI Legacy Imaging Surveys cutouts

The Python module legacystamps provides easy retrieval, both standalone and scripted, of FITS and JPEG cutouts from the DESI Legacy Imaging Surveys through URLs provided by the Legacy Survey viewer.
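
For illustration, a cutout of the kind legacystamps retrieves can also be fetched by constructing the Legacy Survey viewer's cutout URL directly (the module's own API may differ):

```python
import urllib.request

# Hypothetical target coordinates (degrees)
ra, dec = 150.1, 2.2
url = ("https://www.legacysurvey.org/viewer/fits-cutout"
       f"?ra={ra}&dec={dec}&layer=ls-dr9&pixscale=0.262&size=256")
urllib.request.urlretrieve(url, "cutout.fits")  # save a 256x256 pixel FITS cutout
```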

[ascl:2502.024] legacypipe: Image reduction pipeline for DESI Legacy Imaging Surveys

legacypipe produces the data products of the DESI Legacy Imaging Surveys (aka the Legacy Surveys). It can process individual exposures from many cameras, including the Dark Energy Camera on the Blanco telescope, the 90Prime camera on the Bok telescope, and the Mosaic3 camera on the Mayall telescope. The code can also process exposures from Hyper Suprime-Cam on Subaru, the old SuprimeCam on Subaru, MegaCam on the Canada-France-Hawaii Telescope, and image products from the GALEX and WISE satellites. Legacypipe performs source detection, and then measurement via forward modeling using The Tractor (ascl:1604.008). It generates coadded output images as well as catalogs, plus a variety of metrics useful for understanding the properties of the imaging.

[ascl:2307.054] LEFTfield: Forward modeling of cosmological density fields

LEFTfield forward-models cosmological matter density fields and biased tracers of large-scale structure. Written in C++, the code is centered around classes encapsulating scalar, vector, and tensor grids. It includes the complete bias expansion at any order in perturbations and captures general expansion histories without relying on the EdS approximation; the latter is, however, also implemented and results in substantially smaller computational demands. LEFTfield includes a subset of the nonlinear higher-derivative terms in the bias expansion of general tracers.

[ascl:1104.006] LECTOR: Line-strengths in One-dimensional ASCII Spectra

LECTOR is a Fortran 77 code that measures line-strengths in one-dimensional ASCII spectra. The code returns the values of the Lick indices as well as those of Vazdekis & Arimoto 1999, Vazdekis et al. 2001, Rose 1994, Jones & Worthey 1995, and Cenarro et al. 2001. The code measures as many indices as you wish, provided the limits of the two pseudocontinua (at each side of the feature) and of the feature itself (i.e., a Lick-style index definition) are given. The Lick-style indices can be expressed either in pseudo-equivalent widths or in magnitudes. If requested, the program provides index error estimates on the basis of photon statistics.

[ascl:1507.016] Least Asymmetry: Centering Method

Least Asymmetry finds the center of a distribution of light in an image using the least asymmetry method; the code also contains center-of-light and Gaussian-fitting routines. All functions in Least Asymmetry are designed to take optional weights.
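
A minimal sketch of the center-of-light idea with optional weights (illustrative, not the package's exact signature):

```python
import numpy as np

def center_of_light(img, weights=None):
    """Weighted center-of-light (centroid) of a 2D image.
    Returns (y, x) in pixel coordinates."""
    w = img if weights is None else img * weights
    y, x = np.indices(img.shape)
    total = w.sum()
    return (y * w).sum() / total, (x * w).sum() / total
```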

[ascl:1511.018] LDC3: Three-parameter limb darkening coefficient sampling

LDC3 samples physically permissible limb darkening coefficients for the Sing et al. (2009) three-parameter law. It defines the physically permissible intensity profile as being everywhere-positive, monotonically decreasing from center to limb and having a curl at the limb. The approximate sampling method is analytic and thus very fast, reproducing physically permissible samples in 97.3% of random draws (high validity) and encompassing 94.4% of the physically permissible parameter volume (high completeness).
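
For orientation, the permissibility criteria can be checked numerically by brute force (LDC3 itself uses a fast analytic transformation instead of rejection):

```python
import numpy as np

MU = np.linspace(1e-6, 1.0, 200)

def is_physical(c2, c3, c4, mu=MU):
    """Check the Sing et al. (2009) three-parameter profile
    I(mu) = 1 - c2*(1-mu) - c3*(1-mu**1.5) - c4*(1-mu**2)
    for positivity and monotonic decrease from center (mu=1) to limb
    (mu=0); the curl condition at the limb is omitted here."""
    I = 1 - c2 * (1 - mu) - c3 * (1 - mu**1.5) - c4 * (1 - mu**2)
    return I.min() > 0 and np.all(np.diff(I) >= 0)  # I must rise with mu
```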

[ascl:2205.013] ld-exosim: Simulate biases using different limb darkening laws

ld-exosim selects the optimal (i.e. best estimator in a MSE sense) limb-darkening law for a given transiting exoplanet lightcurve and calculates the limb-darkening induced biases on various exoplanet parameters. Limb-darkening laws include linear, quadratic, logarithmic, square-root and three-parameter laws.

[ascl:2310.002] lcsim: Light curve simulation code

lcsim creates artificial light curves using two algorithms. The first simulates Gaussian distributed light curves following a specific power spectral density (PSD) freely selectable by the user. The second algorithm simulates light curves following a specific PSD and matching a specific probability density function (PDF). The package provides methods to resample the simulated light curves and add "observational" noise. Furthermore, the package provides an interface to a SQLite3-based database to store and access the simulations.
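
A Gaussian light curve with a prescribed PSD can be generated with the standard Timmer & König (1995) recipe, sketched below (this illustrates the technique; lcsim's own interface may differ):

```python
import numpy as np

def simulate_gauss_lc(n, dt, psd_func, rng):
    """Gaussian light curve with power spectral density psd_func(f):
    draw Fourier amplitudes ~ sqrt(PSD/2) with Gaussian real and
    imaginary parts, then inverse-transform to the time domain."""
    freqs = np.fft.rfftfreq(n, d=dt)[1:]      # positive frequencies only
    amp = np.sqrt(0.5 * psd_func(freqs))
    spec = amp * (rng.standard_normal(freqs.size)
                  + 1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(np.concatenate(([0.0 + 0j], spec)), n=n)

rng = np.random.default_rng(42)
lc = simulate_gauss_lc(4096, 10.0, lambda f: f**-2.0, rng)  # red noise
```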

[ascl:1805.003] lcps: Light curve pre-selection

lcps searches for transit-like features (i.e., dips) in photometric data. Its main purpose is to restrict large sets of light curves to a number of files that show interesting behavior, such as drops in flux. While lcps is adaptable to any format of time series, its I/O module is designed specifically for photometry of the Kepler spacecraft. It extracts the pre-conditioned PDCSAP data from light curves files created by the standard Kepler pipeline. It can also handle csv-formatted ascii files. lcps uses a sliding window technique to compare a section of flux time series with its surroundings. A dip is detected if the flux within the window is lower than a threshold fraction of the surrounding fluxes.
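
The sliding-window comparison can be sketched in a few lines (parameter names here are illustrative, not lcps's actual API):

```python
import numpy as np

def find_dips(flux, window=50, threshold=0.99):
    """Flag indices where the median flux inside a window drops below
    `threshold` times the median of the flux on either side of it."""
    hits = []
    for i in range(window, flux.size - 2 * window):
        inside = np.median(flux[i:i + window])
        around = np.median(np.r_[flux[i - window:i],
                                 flux[i + window:i + 2 * window]])
        if inside < threshold * around:
            hits.append(i)
    return hits
```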

[ascl:1708.017] LCC: Light Curves Classifier

Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can be also used for automatic tuning parameters of used methods (for example, number of hidden neurons or binning ratio).

Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases, and filtering outputs by trained filters. Preimplemented descriptors, classifiers, and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

[ascl:1405.001] LBLRTM: Line-By-Line Radiative Transfer Model

LBLRTM (Line-By-Line Radiative Transfer Model) is an accurate line-by-line model that is efficient and highly flexible. LBLRTM attributes provide spectral radiance calculations with accuracies consistent with the measurements against which they are validated and with computational times that greatly facilitate the application of the line-by-line approach to current radiative transfer applications. LBLRTM has been extensively validated against atmospheric radiance spectra from the ultra-violet to the sub-millimeter.

LBLRTM's heritage is in FASCODE [Clough et al., 1981, 1992].

[ascl:2301.014] LBL: Line-by-line velocity measurements

LBL derives velocity measurements from high-resolution (R > 50,000) datasets by accounting for outliers in the spectral data. It is tailored for fiber-fed multi-order spectrographs in both the optical and near-infrared (up to 2.5µm) domains. The domain is split into individual units (lines), and the velocity and its associated uncertainty are measured within each line and combined through a mixture model to allow for the presence of spurious values. In addition to the velocity, other quantities are also derived, the most important being a value (dW) that can be understood (for a Gaussian line) as a change in the line FWHM. These values provide useful stellar activity indicators. LBL works on data from a variety of instruments, including SPIRou, NIRPS, HARPS, and ESPRESSO. The code's output is an rdb table that can be uploaded to the online DACE pRV analysis tool.
