Daily Papers

by AK and the research community

Dec 29

A catalog of ringed galaxies in the TNG50 simulation: Analysis of their properties and structure

The catalog of ringed galaxies was compiled through visual classification of synthetic images from the TNG50 simulation. Galaxies were selected based on specific criteria: a redshift range of 0.01 < z < 0.1, stellar mass M_star > 10^9 M_odot, stellar half-mass radius r_{50} > 1 kpc, and specific star formation rate (sSFR) with log(sSFR/yr^{-1}) > -13. Our classification allowed for differentiation between inner rings, outer rings, combinations of rings, and partial rings (pseudo-rings), including barred and non-barred ringed galaxies. We constructed a control sample of non-ringed galaxies with similar redshift, stellar mass, and environmental density distributions. We identified 807 ringed galaxies. Approximately 59% possess an inner ring, 22% a partial ring, 12% an outer ring, and 7% have inner+outer (i+o) rings. Our statistical analysis reveals that 64% (507 galaxies) exhibit bars. Ringed galaxies exhibit lower efficiency for star formation, reduced gas fractions, redder colors, and higher metallicities compared to non-ringed disk objects. They also show greater variability in metallicity for a given stellar mass. From the analysis of radial profiles, galaxies with outer rings exhibit an r_{50} similar to or slightly larger than their control group, while those with inner or partial rings tend to have smaller sizes. A deeper exploration of radial density profiles revealed a pronounced central mass deficit preceding the ring structures, with inner and outer rings located at r_{50} and 1.5 r_{50}, respectively. Galaxies with both i+o rings have inner rings that are more compact and massive. Additionally, galaxies with partial rings exhibit deeper mass profiles than their controls, particularly in central areas. These findings improve our understanding of galactic evolution and the complex interplay between mass distribution and morphology.
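
As a rough illustration of the sample-selection step quoted above, the sketch below applies the stated cuts (0.01 < z < 0.1, M_star > 10^9 M_odot, r_50 > 1 kpc, log(sSFR/yr^-1) > -13) to arrays of hypothetical galaxy properties. The function name, array layout, and toy values are assumptions for illustration, not part of the paper.

```python
import numpy as np

def select_ring_candidates(z, m_star, r50_kpc, ssfr):
    """Apply the TNG50 sample cuts quoted in the abstract.

    z        : redshift
    m_star   : stellar mass in solar masses
    r50_kpc  : stellar half-mass radius in kpc
    ssfr     : specific star formation rate in 1/yr
    Returns a boolean mask of galaxies passing all cuts.
    """
    mask = (z > 0.01) & (z < 0.1)
    mask &= m_star > 1e9
    mask &= r50_kpc > 1.0
    mask &= np.log10(ssfr) > -13.0
    return mask

# Toy example with made-up values
z = np.array([0.05, 0.2, 0.03])
m_star = np.array([5e9, 2e10, 4e8])
r50 = np.array([2.1, 3.0, 0.8])
ssfr = np.array([1e-11, 1e-10, 1e-14])
print(select_ring_candidates(z, m_star, r50, ssfr))  # [ True False False]
```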

  • 4 authors
·
Nov 23, 2024

Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in High-Frequency Trading: A Comprehensive Exploration

The realm of High-Frequency Trading (HFT) is characterized by rapid decision-making processes that capitalize on fleeting market inefficiencies. As the financial markets become increasingly competitive, there is a pressing need for innovative strategies that can adapt and evolve with changing market dynamics. Enter Reinforcement Learning (RL), a branch of machine learning where agents learn by interacting with their environment, making it an intriguing candidate for HFT applications. This paper dives deep into the integration of RL in statistical arbitrage strategies tailored for HFT scenarios. By leveraging the adaptive learning capabilities of RL, we explore its potential to unearth patterns and devise trading strategies that traditional methods might overlook. We delve into the intricate exploration-exploitation trade-offs inherent in RL and how they manifest in the volatile world of HFT. Furthermore, we confront the challenges of applying RL in non-stationary environments, typical of financial markets, and investigate methodologies to mitigate associated risks. Through extensive simulations and backtests, our research reveals that RL not only enhances the adaptability of trading strategies but also shows promise in improving profitability metrics and risk-adjusted returns. This paper, therefore, positions RL as a pivotal tool for the next generation of HFT-based statistical arbitrage, offering insights for both researchers and practitioners in the field.
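
The paper applies deep Q-learning to high-frequency statistical arbitrage; as a much smaller stand-in, the sketch below runs tabular Q-learning with an epsilon-greedy policy on a synthetic mean-reverting "spread", to make the exploration-exploitation loop concrete. The environment, state discretization, and reward are invented for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mean-reverting "spread" (Ornstein-Uhlenbeck-like), a stand-in
# for a real high-frequency stat-arb signal.
def step_spread(s, kappa=0.1, sigma=0.1):
    return s + kappa * (0.0 - s) + sigma * rng.normal()

def discretize(s, edges):
    return int(np.digitize(s, edges))  # state index

edges = np.linspace(-1.0, 1.0, 9)          # 10 spread buckets
n_states, n_actions = len(edges) + 1, 3    # actions: 0=short, 1=flat, 2=long
positions = np.array([-1.0, 0.0, 1.0])

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration

s = 0.0
state = discretize(s, edges)
for t in range(100_000):
    # epsilon-greedy trade-off between exploration and exploitation
    if rng.random() < eps:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(Q[state]))
    s_next = step_spread(s)
    reward = positions[a] * (s_next - s)   # mark-to-market PnL of the position
    next_state = discretize(s_next, edges)
    # one-step Q-learning update
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    s, state = s_next, next_state

# The learned policy should tend to go long when the spread is low and short when high.
print([positions[int(np.argmax(Q[st]))] for st in range(n_states)])
```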

  • 1 author
·
Sep 13, 2023

LLM Agent-Based Simulation of Student Activities and Mental Health Using Smartphone Sensing Data

Students' mental well-being is vital for academic success, with activities such as studying, socializing, and sleeping playing a role. Current mobile sensing studies highlight this intricate link through statistical and machine learning analyses. We propose a novel LLM agent-based simulation framework to model student activities and mental health using the StudentLife Dataset. Each LLM agent was initialized with personality questionnaires and guided by smartphone sensing data throughout the simulated semester. These agents predict individual behaviors, provide self-reported mental health data via ecological momentary assessments (EMAs), and complete follow-up personality questionnaires. To ensure accuracy, we investigated various prompting techniques, memory systems, and activity-based mental state management strategies that dynamically update an agent's mental state based on their daily activities. This simulation goes beyond simply replicating existing data: it allows us to explore new scenarios that are not present in the original dataset, such as peer influence through agent-to-agent interactions and the impact of social media. Furthermore, we can conduct intervention studies by manipulating activity patterns via sensing signals and personality traits via questionnaire responses. This provides valuable insights into the behavioral changes that could enhance student well-being. The framework also facilitates hypothetical interviews with LLM agents, offering deeper insights into their mental health. This study showcases the power of LLM-driven behavioral modeling with sensing data, opening new avenues for understanding and supporting student mental health.
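
To make the idea of "activity-based mental state management" concrete, here is a toy agent whose internal state is nudged by daily sensed activities and queried through a simulated EMA. The update rule, field names, and numbers are invented; in the paper these updates are driven by an LLM conditioned on smartphone sensing data, not by a hand-written heuristic.

```python
from dataclasses import dataclass, field

@dataclass
class StudentAgent:
    """Toy agent whose 'mental state' is nudged by daily activities."""
    name: str
    stress: float = 0.5              # 0 = relaxed, 1 = highly stressed
    memory: list = field(default_factory=list)

    def observe_day(self, sleep_h: float, study_h: float, social_h: float):
        # Hypothetical heuristic: low sleep and heavy studying raise stress,
        # socializing lowers it; values are clipped to [0, 1].
        delta = 0.05 * (study_h - 4) + 0.08 * (7 - sleep_h) - 0.04 * social_h
        self.stress = min(1.0, max(0.0, self.stress + delta))
        self.memory.append({"sleep": sleep_h, "study": study_h,
                            "social": social_h, "stress": self.stress})

    def ema_response(self) -> str:
        # Stand-in for an ecological momentary assessment answered by the agent.
        return f"{self.name}: stress level {self.stress:.2f} today"

agent = StudentAgent("agent_01")
for day in [(6.0, 8.0, 1.0), (7.5, 3.0, 3.0), (5.0, 9.0, 0.5)]:
    agent.observe_day(*day)
    print(agent.ema_response())
```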

Character-lab
·
Jul 16

TRADES: Generating Realistic Market Simulations with Diffusion Models

Financial markets are complex systems characterized by high statistical noise, nonlinearity, and constant evolution. Thus, modeling them is extremely hard. We address the task of generating realistic and responsive Limit Order Book (LOB) market simulations, which are fundamental for calibrating and testing trading strategies, performing market impact experiments, and generating synthetic market data. Previous works lack realism, usefulness, and responsiveness in the generated simulations. To bridge this gap, we propose a novel TRAnsformer-based Denoising Diffusion Probabilistic Engine for LOB Simulations (TRADES). TRADES generates realistic order flows conditioned on the state of the market, leveraging a transformer-based architecture that captures the temporal and spatial characteristics of high-frequency market data. There is a notable absence of quantitative metrics for evaluating generative market simulation models in the literature. To tackle this problem, we adapt the predictive score, a metric measured as an MAE, by training a stock price predictive model on synthetic data and testing it on real data. We compare TRADES with previous works on two stocks, reporting 3.27x and 3.47x improvements over the state of the art according to the predictive score, demonstrating that we generate useful synthetic market data for financial downstream tasks. We assess TRADES's market simulation realism and responsiveness, showing that it effectively learns the conditional data distribution and successfully reacts to an experimental agent, opening the way to calibration and evaluation of trading strategies and market impact experiments. We developed DeepMarket, the first open-source Python framework for market simulation with deep learning. Our repository includes a synthetic LOB dataset composed of TRADES-generated simulations. We release the code at github.com/LeonardoBerti00/DeepMarket.
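
The "predictive score" used above is an MAE obtained by training a predictor on synthetic data and testing it on real data. The sketch below illustrates that evaluation loop with a plain least-squares model and made-up arrays; the feature construction, data, and model are placeholders, whereas the real evaluation uses LOB features and a learned stock-price predictor.

```python
import numpy as np

def predictive_score(X_syn, y_syn, X_real, y_real):
    """Train a simple predictor on synthetic data, evaluate MAE on real data.

    A lower score means the synthetic data is more useful for the downstream
    prediction task. The linear least-squares model is a placeholder for
    whatever predictive model one prefers.
    """
    A = np.hstack([X_syn, np.ones((len(X_syn), 1))])      # add bias term
    w, *_ = np.linalg.lstsq(A, y_syn, rcond=None)
    A_real = np.hstack([X_real, np.ones((len(X_real), 1))])
    y_hat = A_real @ w
    return np.mean(np.abs(y_hat - y_real))                # mean absolute error

# Toy data: "real" targets follow a linear rule plus noise; one synthetic set
# matches that rule, another does not.
rng = np.random.default_rng(1)
X_real = rng.normal(size=(500, 3))
y_real = X_real @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=500)

X_good = rng.normal(size=(500, 3))
y_good = X_good @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=500)
X_bad = rng.normal(size=(500, 3))
y_bad = rng.normal(size=500)                               # unrelated targets

print("good synthetic:", predictive_score(X_good, y_good, X_real, y_real))
print("bad synthetic: ", predictive_score(X_bad, y_bad, X_real, y_real))
```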

  • 3 authors
·
Jan 31

Suppressing the sample variance of DESI-like galaxy clustering with fast simulations

Ongoing and upcoming galaxy redshift surveys, such as the Dark Energy Spectroscopic Instrument (DESI) survey, will observe vast regions of sky and a wide range of redshifts. In order to model the observations and address various systematic uncertainties, N-body simulations are routinely adopted; however, the number of large simulations with sufficiently high mass resolution is usually limited by available computing time. Therefore, achieving a simulation volume with effective statistical errors significantly smaller than those of the observations becomes prohibitively expensive. In this study, we apply the Convergence Acceleration by Regression and Pooling (CARPool) method to mitigate the sample variance of the DESI-like galaxy clustering in the AbacusSummit simulations, with the assistance of the quasi-N-body simulations FastPM. Based on the halo occupation distribution (HOD) models, we construct different FastPM galaxy catalogs, including the luminous red galaxies (LRGs), emission line galaxies (ELGs), and quasars, with their number densities and two-point clustering statistics well matched to those of AbacusSummit. We also employ the same initial conditions between AbacusSummit and FastPM to achieve high cross-correlation, as it is useful in effectively suppressing the variance. Our method of reducing noise in clustering is equivalent to performing a simulation with volume larger by a factor of 5 and 4 for LRGs and ELGs, respectively. We also reduce the standard deviation of the LRG bispectrum for triangular configurations with k_2 = 2k_1 = 0.2 h/Mpc by a factor of 1.6. With smaller sample variance on galaxy clustering, we are able to constrain the baryon acoustic oscillations (BAO) scale parameters to higher precision. The CARPool method will be beneficial to better constrain the theoretical systematics of BAO, redshift space distortions (RSD) and primordial non-Gaussianity (NG).
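
CARPool is, at its core, a control-variate construction: an expensive measurement is combined with a cheap, highly correlated surrogate run from the same initial conditions, y = x - beta (c - mu_c). The minimal scalar sketch below, with synthetic numbers, shows the variance-reduction mechanism; the actual method operates on clustering statistics (power spectra, bispectra) and estimates beta from a small set of paired AbacusSummit/FastPM runs, whereas here everything is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Paired measurements of some clustering statistic from matched initial
# conditions: x = expensive (AbacusSummit-like), c = cheap surrogate (FastPM-like).
n_pairs = 20
common = rng.normal(size=n_pairs)                 # shared sample variance
x = 1.00 + common + 0.05 * rng.normal(size=n_pairs)
c = 0.95 + common + 0.05 * rng.normal(size=n_pairs)

mu_c = 0.95   # surrogate mean, assumed known from many cheap surrogate runs
beta = np.cov(x, c)[0, 1] / np.var(c, ddof=1)     # control-variate coefficient

y = x - beta * (c - mu_c)                         # CARPool-style estimator

print("plain mean      :", x.mean(), "+/-", x.std(ddof=1) / np.sqrt(n_pairs))
print("variance-reduced:", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(n_pairs))
```

The reduction in the standard error of y relative to x plays the same role as the factor-of-4 to factor-of-5 effective-volume gains reported in the abstract.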

  • 47 authors
·
Apr 3, 2024

On the statistical theory of self-gravitating collisionless dark matter flow: Scale and redshift variation of velocity and density distributions

This paper studies the scale and redshift variation of density and velocity distributions in self-gravitating collisionless dark matter flow by a halo-based non-projection approach. All particles are divided into halo and out-of-halo particles for the redshift variation of the distributions. Without projecting particle fields onto a structured grid, the scale variation is analyzed by identifying all particle pairs on different scales r. We demonstrate that: i) Delaunay tessellation can be used to reconstruct the density field; the density correlation, spectrum, and dispersion functions were obtained, modeled, and compared with the N-body simulation; ii) the velocity distributions are symmetric on both small and large scales and are non-symmetric with a negative skewness on intermediate scales, due to the inverse energy cascade at a constant rate \varepsilon_u; iii) on small scales, the even-order moments of the pairwise velocity \Delta u_L follow a two-thirds law (-\varepsilon_u r)^{2/3}, while the odd-order moments follow a linear scaling \langle(\Delta u_L)^{2n+1}\rangle = (2n+1)\langle(\Delta u_L)^{2n}\rangle\langle\Delta u_L\rangle \propto r; iv) the scale variation of the velocity distributions was studied for the longitudinal velocities u_L and u_L', the pairwise velocity (velocity difference) \Delta u_L = u_L' - u_L, and the velocity sum \Sigma u_L = u_L' + u_L; fully developed velocity fields are never Gaussian on any scale, even though they may initially be Gaussian; v) on small scales, u_L and \Sigma u_L can be modeled by an X distribution to maximize the system entropy; vi) on large scales, \Delta u_L and \Sigma u_L can be modeled by a logistic or an X distribution; vii) the redshift variation of the velocity distributions follows the evolution of the X distribution, involving a shape parameter \alpha(z) that decreases with time.
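
A toy version of the pair-based measurement described above, estimating moments of the longitudinal pairwise velocity \Delta u_L in a separation bin, is sketched below. The brute-force pair search and the random particle data are placeholders for illustration only and are not the paper's halo-based pipeline.

```python
import numpy as np

def pairwise_velocity_moments(pos, vel, r_min, r_max, orders=(1, 2, 3)):
    """Moments of the longitudinal pairwise velocity Delta u_L on a scale bin.

    pos, vel : (N, 3) particle positions and velocities
    Pairs with separation in [r_min, r_max) contribute; brute force O(N^2),
    fine only for small toy catalogs.
    """
    dn = pos[:, None, :] - pos[None, :, :]          # pair separation vectors
    r = np.linalg.norm(dn, axis=-1)
    iu = np.triu_indices(len(pos), k=1)             # unique pairs
    sel = (r[iu] >= r_min) & (r[iu] < r_max)
    rhat = dn[iu][sel] / r[iu][sel][:, None]
    dv = (vel[:, None, :] - vel[None, :, :])[iu][sel]
    du_L = np.einsum("ij,ij->i", dv, rhat)          # longitudinal component
    return {n: np.mean(du_L**n) for n in orders}

rng = np.random.default_rng(3)
pos = rng.uniform(0, 10, size=(400, 3))
vel = rng.normal(0, 1, size=(400, 3))
print(pairwise_velocity_moments(pos, vel, 1.0, 2.0))
```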

  • 1 author
·
Feb 14, 2022

Tracing cosmic voids with fast simulations

Context. Cosmic voids are vast underdense regions in the cosmic web that encode crucial information about structure formation, the composition of the Universe, and its expansion history. Due to their lower density, these regions are less affected by non-linear gravitational dynamics, making them suitable candidates for analysis using semi-analytic methods. Aims. We assess the accuracy of the PINOCCHIO code, a fast tool for generating dark matter halo catalogs based on Lagrangian Perturbation Theory, in modeling the statistical properties of cosmic voids. We validate this approach by comparing the resulting void statistics measured from PINOCCHIO to those obtained from N-body simulations. Methods. We generate a set of simulations using PINOCCHIO and OpenGADGET3, assuming a fiducial cosmology and varying the resolution. For a given resolution, the simulations share the same initial conditions between the different simulation codes. Snapshots are saved at multiple redshifts for each simulation and post-processed using the watershed void finder VIDE to identify cosmic voids. For each simulation code, we measure the following statistics: void size function, void ellipticity function, core density function, and the void radial density profile. We use these statistics to quantify the accuracy of PINOCCHIO relative to OpenGADGET3 in the context of cosmic voids. Results. We find agreement for all void statistics at better than 2σ between PINOCCHIO and OpenGADGET3, with no systematic difference in redshift trends. This demonstrates that the PINOCCHIO code can reliably produce void statistics with high computational efficiency compared to full N-body simulations.
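
One of the statistics compared above is the void size function, i.e. the number density of voids per radius bin. A generic sketch of that comparison for two void catalogs is below; the void radii are random placeholders and the box volume and bins are invented, so only the bookkeeping (not any physical result) is illustrated.

```python
import numpy as np

def void_size_function(radii, box_volume, bins):
    """Comoving number density of voids per logarithmic radius bin."""
    counts, edges = np.histogram(radii, bins=bins)
    dlnR = np.diff(np.log(edges))
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, counts / (box_volume * dlnR)

rng = np.random.default_rng(4)
bins = np.logspace(np.log10(5), np.log10(50), 12)   # Mpc/h, placeholder range
box_volume = 500.0**3                               # (Mpc/h)^3, placeholder

radii_fast = rng.lognormal(mean=np.log(15), sigma=0.35, size=3000)   # fast code
radii_nbody = rng.lognormal(mean=np.log(15), sigma=0.35, size=3000)  # N-body

R, vsf_fast = void_size_function(radii_fast, box_volume, bins)
_, vsf_nbody = void_size_function(radii_nbody, box_volume, bins)
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = vsf_fast / vsf_nbody
print(np.round(ratio, 2))   # values near 1 mean the two catalogs agree
```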

  • 6 authors
·
Jun 24

Uncertainty Quantification for Multi-fidelity Simulations

This work focuses on gathering high-fidelity and low-fidelity numerical simulation data using Nektar++ (a solver based on applied mathematics) and XFOIL, respectively. The use of a higher polynomial-order distribution in computing the lift and drag coefficients demonstrated superior accuracy and precision. Co-kriging data fusion and an adaptive sampling technique were then used to obtain precise predictions of lift and drag within the confined domain without running costly simulations on HPC clusters. This establishes a methodology for quantifying uncertainty in computational fluid dynamics while minimizing the required number of samples. To reduce the reliance on high-fidelity numerical simulations in uncertainty quantification, a multi-fidelity strategy has been adopted. The effectiveness of the multi-fidelity deep neural network (MF-DNN) model was validated by approximating benchmark functions in 1, 32, and 100 dimensions, encompassing both linear and nonlinear correlations. The surrogate modelling results showed that the MF-DNN model has excellent approximation capabilities for the test functions and outperformed Co-kriging in effectiveness. In addition, the MF-DNN model was used to simulate aleatory uncertainty propagation in the 1-, 32-, and 100-dimensional function tests, considering both uniform and Gaussian distributions for the input uncertainties. The results show that the MF-DNN model efficiently and accurately predicts the probability density distributions of the quantities of interest as well as their statistical moments. The Co-kriging model exhibited limitations when addressing the 32-dimensional problems due to memory-capacity constraints for storage and manipulation.
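
A common way to build a multi-fidelity surrogate (assumed here as an illustration, not taken from this paper) is to train one network on plentiful low-fidelity samples and a second network on the concatenation [x, LF(x)] against scarce high-fidelity samples. The sketch below does this on the classic Forrester 1D benchmark pair with scikit-learn; the architectures and sample counts are arbitrary, and the toy fit is not expected to match the paper's accuracy.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Classic 1D multi-fidelity benchmark pair: the cheap model is a shifted and
# scaled distortion of the expensive one.
def high_fidelity(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def low_fidelity(x):
    return 0.5 * high_fidelity(x) + 10 * (x - 0.5) - 5

x_lo = rng.uniform(0, 1, 80)[:, None]      # plentiful cheap samples
x_hi = np.linspace(0, 1, 8)[:, None]       # scarce expensive samples

net_lo = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net_lo.fit(x_lo, low_fidelity(x_lo).ravel())

# Second network learns the low-to-high correction from [x, LF(x)] features.
feat_hi = np.hstack([x_hi, net_lo.predict(x_hi)[:, None]])
net_hi = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net_hi.fit(feat_hi, high_fidelity(x_hi).ravel())

x_test = np.linspace(0, 1, 5)[:, None]
feat_test = np.hstack([x_test, net_lo.predict(x_test)[:, None]])
print("prediction:", np.round(net_hi.predict(feat_test), 2))
print("truth     :", np.round(high_fidelity(x_test).ravel(), 2))
```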

  • 1 author
·
Mar 11

First Light And Reionisation Epoch Simulations (FLARES) XVI: Size Evolution of Massive Dusty Galaxies at Cosmic Dawn from UV to IR

We use the First Light And Reionisation Epoch Simulations (FLARES) to study the evolution of the rest-frame ultraviolet (UV) and far-infrared (FIR) sizes for a statistical sample of massive (≳ 10^9 M_odot) high-redshift galaxies (z ∈ [5, 10]). Galaxies are post-processed using the SKIRT radiative transfer code to self-consistently obtain the full spectral energy distribution and surface brightness distribution. We create mock observations of the galaxies for the Near Infrared Camera (NIRCam) to study the rest-frame UV (1500 Å) morphology. We also generate mock rest-frame FIR (50 μm) photometry and mock ALMA (158 μm) observations (at 0.01"-0.03" and ≈0.3" angular resolution) to study the dust continuum. We find that the effect of dust on observed sizes decreases with increasing wavelength from the UV to the optical (~0.6 times the UV size at 0.4 μm), with no evolution in FIR sizes. Observed sizes vary within 0.4-1.2 times the intrinsic sizes at different signal-to-noise ratios (SNR = 5-20) across redshifts. The effect of the PSF and noise makes bright structures prominent, whereas fainter regions blend with the noise, leading to an underestimation (by a factor of 0.4-0.8) of sizes at SNR = 5. At SNR = 15-20, the underestimation is reduced (factor of 0.6-0.9) at z = 5-8, but due to the PSF, bright cores dominate at z = 9-10, resulting in an overestimation (factor of 1.0-1.2). For ALMA, low-resolution sizes are affected by noise, which acts as extended emission. The size evolution in the UV broadly agrees with current observational samples and other simulations. This work is one of the first to analyse the panchromatic sizes of a statistically significant sample of simulated high-redshift galaxies, complementing a growing body of research highlighting the importance of conducting an equivalent comparison between observed galaxies and their simulated counterparts in the early Universe.
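
To make the notion of an "observed size" concrete, here is a toy curve-of-growth measurement of the half-light radius from a 2D image array, applied to a synthetic exponential disk. None of this comes from the FLARES pipeline (which relies on SKIRT plus mock NIRCam/ALMA observations); the profile, image size, and centering choice are assumptions for illustration.

```python
import numpy as np

def half_light_radius(image, pix_scale):
    """Radius (in units of pix_scale) enclosing half the total flux,
    measured via a simple curve of growth around the brightest pixel."""
    cy, cx = np.unravel_index(np.argmax(image), image.shape)
    y, x = np.indices(image.shape)
    r = np.hypot(y - cy, x - cx).ravel()
    flux = image.ravel()
    order = np.argsort(r)
    cumflux = np.cumsum(flux[order])
    idx = np.searchsorted(cumflux, 0.5 * cumflux[-1])
    return r[order][idx] * pix_scale

# Synthetic exponential disk with scale length 4 pixels; its analytic
# half-light radius is ~1.68 scale lengths, i.e. about 6.7 pixels.
y, x = np.indices((201, 201))
r = np.hypot(y - 100, x - 100)
disk = np.exp(-r / 4.0)
print(half_light_radius(disk, pix_scale=1.0))
```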

  • 12 authors
·
Aug 20, 2024

Information Theory and Statistical Mechanics Revisited

The statistical mechanics of Gibbs is a juxtaposition of subjective, probabilistic ideas on the one hand and objective, mechanical ideas on the other. In this paper, we follow the path set out by Jaynes, including elements added subsequently to that original work, to explore the consequences of the purely statistical point of view. We show how standard methods in the equilibrium theory could have been derived simply from a description of the available problem information. In addition, our presentation leads to novel insights into questions associated with symmetry and non-equilibrium statistical mechanics. Two surprising consequences to be explored in further work are that (in)distinguishability factors are automatically predicted from the problem formulation and that a quantity related to the thermodynamic entropy production is found by considering information loss in non-equilibrium processes. Using the problem of ion channel thermodynamics as an example, we illustrate the idea of building up complexity by successively adding information to create progressively more complex descriptions of a physical system. Our result is that such statistical mechanical descriptions can be used to create transparent, computable, experimentally relevant models that may be informed by more detailed atomistic simulations. We also derive a theory for the kinetic behavior of this system, identifying the nonequilibrium 'process' free energy functional. The Gibbs relation for this functional is a fluctuation-dissipation theorem applicable arbitrarily far from equilibrium, which captures the effect of non-local and time-dependent behavior arising from transient driving forces. Based on this work, it is clear that statistical mechanics is a general tool for constructing the relationships between constraints on system information.
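
A tiny worked example of the Jaynes-style procedure the paper builds on: impose only a mean-energy constraint on a set of discrete states, maximize the entropy, and the result is the Boltzmann distribution p_i ∝ exp(-βE_i), with the multiplier β fixed by the constraint. The energy levels and target below are arbitrary, and this sketch is only a numerical illustration of that standard construction, not the paper's ion-channel model.

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 5.0])   # arbitrary energy levels
E_mean_target = 1.2                  # the only information we impose

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

# Solve <E>(beta) = target; maximum entropy subject to that constraint gives
# exactly the Boltzmann distribution p_i proportional to exp(-beta * E_i).
beta = brentq(lambda b: mean_energy(b) - E_mean_target, -10, 10)
p = np.exp(-beta * E)
p /= p.sum()

print("beta    =", round(beta, 4))
print("p       =", np.round(p, 4), " <E> =", round(p @ E, 4))
print("entropy =", round(-(p * np.log(p)).sum(), 4))
```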

  • 3 authors
·
May 27, 2011

Applicability and Surrogacy of Uncorrelated Airspace Encounter Models at Low Altitudes

The National Airspace System (NAS) is a complex and evolving system that enables safe and efficient aviation. Advanced air mobility concepts and new airspace entrants, such as unmanned aircraft, must integrate into the NAS without degrading overall safety or efficiency. For instance, regulations, standards, and systems are required to mitigate the risk of a midair collision between aircraft. Monte Carlo simulations have been a foundational capability for decades to develop, assess, and certify aircraft conflict avoidance systems. These are often validated through human-in-the-loop experiments and flight testing. For many aviation safety studies, manned aircraft behavior is represented using dynamic Bayesian networks. The original statistical models were developed from 2008-2013 to support safety simulations for altitudes above 500 feet Above Ground Level (AGL). However, these models were not sufficient to assess the safety of smaller UAS operations below 500 feet AGL. In response, newer models with altitude floors below 500 feet AGL have been in development since 2018. Many of the models assume that aircraft behavior is uncorrelated and not dependent on air traffic services or nearby aircraft. Our research objective was to compare the various uncorrelated models of conventional aircraft and identify how the models differ, particularly whether models of rotorcraft are sufficiently different from models of fixed-wing aircraft to require type-specific models. The primary contribution is guidance on which uncorrelated models to leverage when evaluating the performance of a collision avoidance system designed for low altitude operations. We also address which models can be surrogates for noncooperative aircraft without transponders.
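
The encounter models discussed above are dynamic Bayesian networks fit to surveillance data. As a purely illustrative stand-in, the sketch below samples aircraft "maneuver states" from a small Markov transition matrix inside a Monte Carlo draw; the states, probabilities, and vertical rates are invented and far simpler than the real models, which condition on altitude layer, airspace class, and other variables.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented maneuver states and per-second transition probabilities.
states = ["level", "climb", "descend"]
T = np.array([[0.90, 0.05, 0.05],
              [0.20, 0.75, 0.05],
              [0.20, 0.05, 0.75]])
vertical_rate_fpm = {"level": 0.0, "climb": 500.0, "descend": -500.0}

def sample_trajectory(n_steps, alt0_ft=1000.0):
    s, alt, track = 0, alt0_ft, []
    for _ in range(n_steps):
        s = rng.choice(3, p=T[s])
        alt += vertical_rate_fpm[states[s]] / 60.0   # one-second step
        track.append((states[s], alt))
    return track

# One Monte Carlo draw of a 30-second trajectory (first five seconds shown)
for state, alt in sample_trajectory(30)[:5]:
    print(state, round(alt, 1))
```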

  • 2 authors
·
Mar 4, 2021

Adaptive Sampling Strategies to Construct Equitable Training Datasets

In domains ranging from computer vision to natural language processing, machine learning models have been shown to exhibit stark disparities, often performing worse for members of traditionally underserved groups. One factor contributing to these performance gaps is a lack of representation in the data the models are trained on. It is often unclear, however, how to operationalize representativeness in specific applications. Here we formalize the problem of creating equitable training datasets, and propose a statistical framework for addressing this problem. We consider a setting where a model builder must decide how to allocate a fixed data collection budget to gather training data from different subgroups. We then frame dataset creation as a constrained optimization problem, in which one maximizes a function of group-specific performance metrics based on (estimated) group-specific learning rates and costs per sample. This flexible approach incorporates preferences of model-builders and other stakeholders, as well as the statistical properties of the learning task. When data collection decisions are made sequentially, we show that under certain conditions this optimization problem can be efficiently solved even without prior knowledge of the learning rates. To illustrate our approach, we conduct a simulation study of polygenic risk scores on synthetic genomic data -- an application domain that often suffers from non-representative data collection. We find that our adaptive sampling strategy outperforms several common data collection heuristics, including equal and proportional sampling, demonstrating the value of strategic dataset design for building equitable models.
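
A greedy caricature of the sequential allocation idea is sketched below: given assumed group-specific learning curves (error falling as a power law in sample size) and per-sample costs, spend the budget on whichever group currently buys the largest reduction in the worst-group error per unit cost. The power-law form, parameters, and greedy rule are illustrative assumptions; the paper formulates and solves a more general constrained optimization, including the case where learning rates are estimated on the fly.

```python
import numpy as np

# Assumed power-law learning curves err_g(n) = a_g * n^(-b_g).
a = np.array([1.0, 1.5, 2.0])       # group-specific constants (hypothetical)
b = np.array([0.35, 0.30, 0.25])
cost = np.array([1.0, 1.0, 2.0])    # cost per sample for each group

def error(n):
    return a * np.maximum(n, 1) ** (-b)

def allocate(budget, step=10):
    n = np.zeros(3)
    while budget >= step * cost.min():
        worst = error(n).max()
        # marginal reduction of the worst-group error per unit cost if we
        # bought `step` more samples for group g
        gains = np.zeros(3)
        for g in range(3):
            trial = n.copy()
            trial[g] += step
            gains[g] = (worst - error(trial).max()) / (step * cost[g])
        g_best = int(np.argmax(gains))
        if step * cost[g_best] > budget:
            break
        n[g_best] += step
        budget -= step * cost[g_best]
    return n

n_opt = allocate(budget=3000)
print("samples per group :", n_opt)
print("final group errors:", np.round(error(n_opt), 3))
```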

  • 7 authors
·
Jan 31, 2022

Parallel Learning by Multitasking Neural Networks

A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e., parallel learning). While this cannot be accomplished by standard Hebbian associative neural networks, in this paper we show how the Multitasking Hebbian Network (a variation on the theme of the Hopfield model, working on sparse datasets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) amount of patterns, mirroring the low-storage level of standard associative neural networks at work with pattern recognition. For mild dilution in the patterns, the network handles them hierarchically, distributing the amplitudes of their signals as power laws w.r.t. their information content (hierarchical regime), while, for strong dilution, all the signals pertaining to all the patterns are raised with the same strength (parallel regime). Further, confined to the low-storage setting (i.e., far from the spin-glass limit), the presence of a teacher neither alters the multitasking performance nor changes the thresholds for learning: the latter are the same whether the training protocol is supervised or unsupervised. Results obtained through statistical mechanics, the signal-to-noise technique, and Monte Carlo simulations are overall in perfect agreement and carry interesting insights on learning multiple patterns at once: for instance, whenever the cost function of the model is minimized in parallel on several patterns (in its description via statistical mechanics), the same happens to the standard sum-squared error loss function (typically used in machine learning).
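
A minimal numerical illustration of parallel retrieval with diluted patterns (not the paper's analysis) is sketched below: store a few sparse patterns with entries in {-1, 0, +1}, build pairwise Hebbian couplings, relax from a mixed state with zero-temperature asynchronous updates, and check the overlaps (Mattis magnetizations) with each pattern. Network size, dilution level, and initialization are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, K, dilution = 400, 3, 0.5          # neurons, patterns, P(entry == 0)

# Sparse patterns: entries in {-1, 0, +1}; the zeros implement the dilution.
xi = rng.choice([-1, 0, 1], size=(K, N),
                p=[(1 - dilution) / 2, dilution, (1 - dilution) / 2])

J = (xi.T @ xi) / N                    # Hebbian couplings
np.fill_diagonal(J, 0.0)

# Start from a noisy mixture of all patterns and relax with asynchronous
# zero-temperature (sign) updates.
sigma = np.sign(xi.sum(axis=0) + 0.5 * rng.normal(size=N))
sigma[sigma == 0] = 1
for sweep in range(20):
    for i in rng.permutation(N):
        h = J[i] @ sigma
        if h != 0:
            sigma[i] = np.sign(h)

overlaps = xi @ sigma / N              # Mattis magnetization with each pattern
print(np.round(overlaps, 3))           # several nonzero overlaps = parallel retrieval
```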

  • 4 authors
·
Aug 8, 2023

Dense Hebbian neural networks: a replica symmetric picture of supervised learning

We consider dense associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations. In particular, we obtain a phase diagram summarizing their performance as a function of control parameters such as the quality and quantity of the training dataset, network storage, and noise, which is valid in the limit of large network size and structureless datasets: these networks may work in an ultra-storage regime (where they can handle a huge amount of patterns compared with shallow neural networks) or in an ultra-detection regime (where they can perform pattern recognition at prohibitive signal-to-noise ratios compared with shallow neural networks). Guided by the theory for random, structureless datasets as a reference framework, we also numerically test the learning, storage, and retrieval capabilities of these networks on structured datasets such as MNIST and Fashion-MNIST. As technical remarks, on the analytic side we implement large-deviation and stability analyses within Guerra's interpolation to tackle the non-Gaussian distributions involved in the post-synaptic potentials, while on the computational side we insert the Plefka approximation into the Monte Carlo scheme to speed up the evaluation of the synaptic tensors, overall obtaining a novel and broad approach to investigate supervised learning in neural networks, beyond the shallow limit, in general.
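
To illustrate the supervised-dataset structure (noisy examples of hidden archetypes, with the couplings built from the examples rather than the archetypes themselves), here is a toy sketch. It uses the shallow pairwise case for brevity, whereas the paper analyses dense, higher-order couplings; sizes, example quality, and the synchronous update rule are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)
N, K, M, r = 300, 2, 40, 0.8     # neurons, archetypes, examples each, example quality

# Hidden archetypes and noisy supervised examples: each example entry equals the
# archetype entry with probability (1 + r) / 2.
xi = rng.choice([-1, 1], size=(K, N))
flip = rng.random(size=(K, M, N)) < (1 - r) / 2
examples = np.where(flip, -xi[:, None, :], xi[:, None, :])

# Hebbian couplings built from the examples only (the network never sees xi).
# NOTE: pairwise case shown for brevity; the paper studies dense couplings.
J = np.einsum("kmi,kmj->ij", examples, examples) / (N * M)
np.fill_diagonal(J, 0.0)

# Retrieval test: start near archetype 0 and relax with sign updates.
sigma = np.where(rng.random(N) < 0.9, xi[0], -xi[0]).astype(float)
for _ in range(10):
    sigma = np.sign(J @ sigma)
    sigma[sigma == 0] = 1

print("overlap with archetype 0:", round(float(xi[0] @ sigma) / N, 3))
print("overlap with archetype 1:", round(float(xi[1] @ sigma) / N, 3))
```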

  • 8 authors
·
Nov 25, 2022

Implicit factorized transformer approach to fast prediction of turbulent channel flows

Transformer neural operators have recently become an effective approach for surrogate modeling of systems governed by partial differential equations (PDEs). In this paper, we introduce a modified implicit factorized transformer (IFactFormer-m) model which replaces the original chained factorized attention with parallel factorized attention. The IFactFormer-m model successfully performs long-term predictions for turbulent channel flow, whereas the original IFactFormer (IFactFormer-o), Fourier neural operator (FNO), and implicit Fourier neural operator (IFNO) exhibit poor performance. Turbulent channel flows are simulated by direct numerical simulation using fine grids at friction Reynolds numbers Re_{tau} ≈ 180, 395, 590, and filtered to coarse grids for training the neural operators. The neural operator takes the current flow field as input and predicts the flow field at the next time step, and long-term prediction is achieved a posteriori through an autoregressive approach. The results show that IFactFormer-m, compared to other neural operators and traditional large eddy simulation (LES) methods including the dynamic Smagorinsky model (DSM) and the wall-adapted local eddy-viscosity (WALE) model, reduces prediction errors in the short term and achieves stable and accurate long-term prediction of various statistical properties and flow structures, including the energy spectrum, mean streamwise velocity, root mean square (rms) values of fluctuating velocities, Reynolds shear stress, and spatial structures of instantaneous velocity. Moreover, the trained IFactFormer-m is much faster than traditional LES methods. By analyzing the attention kernels, we elucidate the reasons why IFactFormer-m converges faster and achieves a stable and accurate long-term prediction compared to IFactFormer-o. Code and data are available at: https://github.com/huiyu-2002/IFactFormer-m.
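
One of the statistics checked above is the energy spectrum. A generic 1D spectrum of a periodic velocity signal on a uniform grid, averaged over samples, is sketched below on synthetic data; the normalization convention and the toy signal are assumptions and this is not the paper's post-processing code.

```python
import numpy as np

def energy_spectrum_1d(u, dx):
    """One-dimensional energy spectrum E(k) of a periodic velocity signal.

    u  : (n_samples, n_x) velocity fluctuations on a uniform periodic grid
    dx : grid spacing
    Normalized so that sum(E) * dk is approximately mean(u^2) / 2.
    """
    n = u.shape[-1]
    u_hat = np.fft.rfft(u, axis=-1) / n
    E = np.mean(np.abs(u_hat) ** 2, axis=0)
    E[1:] *= 2.0                      # fold in the negative wavenumbers
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
    dk = k[1] - k[0]
    return k, 0.5 * E / dk

# Synthetic fluctuating signal with energy at a few low wavenumbers plus noise
rng = np.random.default_rng(9)
n_x, L = 256, 2 * np.pi
x = np.linspace(0, L, n_x, endpoint=False)
u = (np.sin(3 * x) + 0.5 * np.sin(7 * x))[None, :] + 0.05 * rng.normal(size=(8, n_x))

k, E = energy_spectrum_1d(u, dx=L / n_x)
print("peak wavenumbers:", k[np.argsort(E)[-2:]])   # should be near 3 and 7
```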

  • 3 authors
·
Dec 25, 2024

The Necessity of Imperfection: Reversing Model Collapse via Simulating Cognitive Boundedness

Although synthetic data is widely promoted as a remedy, its prevailing production paradigm -- one optimizing for statistical smoothness -- systematically removes the long-tail, cognitively grounded irregularities that characterize human text. Prolonged training on such statistically optimal but cognitively impoverished data accelerates model collapse. This paper proposes a paradigm shift: instead of imitating the surface properties of data, we simulate the cognitive processes that generate human text. We introduce the Prompt-driven Cognitive Computing Framework (PMCSF), whose core consists of a Cognitive State Decoder (CSD) that reverse-engineers unstructured text into structured cognitive vectors, and a Cognitive Text Encoder (CTE) that re-materializes these states into text enriched with human-typical imperfections via mathematically defined Cognitive Perturbation Operators. The framework is validated through a two-stage objective evaluation pipeline. First, in cognitive codec verification, CTE text yields a Jensen-Shannon divergence of 0.0614 from human text (vs. 0.4431 for standard LLM output), passes double-blind professional media review, and achieves an intraclass correlation coefficient ICC > 0.9 for cognitive profile alignment across heterogeneous models. Second, in functional gain evaluation, isomorphic stress tests in the A-share market show that strategies incorporating CTE-generated data reduce maximum drawdown by 47.4% during the 2015 crash and deliver 8.6% Defensive Alpha, exceeding transaction costs by a factor of 33. Our findings demonstrate that modelling human cognitive limitations -- not copying surface data -- enables synthetic data with genuine functional gain, offering a viable technical pathway toward resolving the AI data-collapse crisis.
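
The evaluation above reports a Jensen-Shannon divergence between generated and human text. A generic sketch of computing JSD between unigram distributions of two small corpora is below; the whitespace tokenization, smoothing constant, and example sentences are placeholders and not the paper's protocol.

```python
import numpy as np
from collections import Counter

def js_divergence(corpus_a, corpus_b, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between unigram distributions."""
    tok_a = " ".join(corpus_a).lower().split()
    tok_b = " ".join(corpus_b).lower().split()
    vocab = sorted(set(tok_a) | set(tok_b))
    ca, cb = Counter(tok_a), Counter(tok_b)
    p = np.array([ca[w] for w in vocab], dtype=float) + eps
    q = np.array([cb[w] for w in vocab], dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

human = ["well, I guess the market kinda overreacted again", "not sure tbh, feels off"]
synthetic = ["the market exhibited a significant correction", "overall sentiment remains negative"]
print(round(js_divergence(human, synthetic), 4))   # 0 = identical, 1 = disjoint vocabularies
```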

  • 1 author
·
Dec 1