Anna Horváth, Emese Forgács-Dajka, Gergely Gábor Barnaföldi (2024.12.01 - 2025.03.31)
Abstract: Compact stars in a Kaluza-Klein space-time with multiple additional compactified spatial dimensions (d) are investigated. Within the extended phenomenological model, a static, spherically symmetric solution is considered, with the equation of state provided by a zero-temperature, interacting multi-dimensional Fermi gas. The maximal masses of compact stars are calculated for different model parameters. We investigate how the existence of multiple extra compactified dimensions affects the Kaluza-Klein compact star structure, and we compare the effect of the number of extra dimensions with that of the excitation number.
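For orientation, in the standard four-dimensional case the structure of such a star follows from the Tolman-Oppenheimer-Volkoff (TOV) equations; the Kaluza-Klein model generalizes both these equations and the Fermi-gas equation of state, and the generalized forms are not reproduced here:
\[
\frac{\mathrm{d}p}{\mathrm{d}r} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}
\left[1+\frac{p(r)}{\rho(r)c^{2}}\right]
\left[1+\frac{4\pi r^{3}p(r)}{m(r)c^{2}}\right]
\left[1-\frac{2Gm(r)}{rc^{2}}\right]^{-1},
\qquad
\frac{\mathrm{d}m}{\mathrm{d}r} = 4\pi r^{2}\rho(r),
\]
where \(\rho = \varepsilon/c^{2}\) is the mass-energy density. Given the equation of state \(p(\rho)\), the maximal mass is obtained by integrating these equations outward from a range of central densities.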
Anna Horváth, Aneta Magdalena Wojnar, Gergely Gábor Barnaföldi (2024.12.01 - 2025.03.31)
Abstract: We investigate the behaviour of massive and massless particles in a strong gravitational field with one extra compactified spatial dimension. We study a Schwarzschild-like solution in the Kaluza-Klein model and the possible modifications to observables known from general relativity. The curvature and the uncertainty relation could be modified, leading to altered thermodynamics.
Örs Legeza (2023.11.01 - 2024.11.30)
Wigner Research Centre for Physics
Publications:
[1] Parallel implementation of the Density Matrix Renormalization Group method achieving a quarter petaFLOPS performance on a single DGX-H100 GPU node
[2] Two-dimensional quantum lattice models via mode optimized hybrid CPU-GPU density matrix renormalization group method
[3] Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries
[4] Massively Parallel Tensor Network State Algorithms on Hybrid CPU-GPU Based Architectures
[5] Cost optimized ab initio tensor network state methods: industrial perspectives
Abstract: The numerical simulation of quantum systems in which correlations between electrons are strong, i.e., cannot be described by perturbation theory, is in the focus of modern physics and chemistry. This, however, poses a major challenge, as the computational complexity usually scales exponentially with system size. Therefore, algorithms in which such scaling can be reduced to polynomial form are the subject of intense research.
The density matrix renormalization group (DMRG) method fulfills these criteria. In addition, the related matrix and tensor algebra can be organized into millions of independent subtasks, which makes the method ideal for massive parallelization. Using our code, during the first phase of the project (2021-2022) we already performed large-scale simulations on various quantum systems, which led to two publications accessible on arXiv:
[1] Massively Parallel Tensor Network State Algorithms on Hybrid CPU-GPU Based Architectures, Andor Menczer, Örs Legeza, arXiv:2305.05581 (2023)
[2] Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries, Andor Menczer, Örs Legeza, arXiv:2309.16724 (2023)
The GPU Laboratory is explicitly cited in the acknowledgements of Ref. [1], as part of those results were generated during phase 1 of the project. In the second phase of the project we aim to further test our simulations on the A100 GPU based infrastructure. Depending on the results, we intend to update or extend the results reported in Ref. [2].
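To illustrate the parallelization idea mentioned above, here is a toy sketch (not the authors' DMRG code): the tensor algebra decomposes into many mutually independent block operations, represented below as dense matrix products dispatched to a process pool; in the actual implementation such subtasks are mapped to GPU streams and tensor cores, and the block sizes shown are made up for illustration.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def contract_block(pair):
    """One independent subtask: a single dense matrix-matrix multiplication."""
    A, B = pair
    return A @ B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical block dimensions standing in for symmetry-sector sizes.
    blocks = [(rng.standard_normal((n, n)), rng.standard_normal((n, n)))
              for n in (64, 128, 256, 128)]
    # Each block contraction is independent, so they can run concurrently.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(contract_block, blocks))
    print([r.shape for r in results])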
Péter Maller\(^1\), Emese Forgács-Dajka\(^1\), Dániel Berényi\(^2\) (2024.10.01. - 2025.02.28.)
- Eötvös Loránd University
- Freelancer
Abstract: The main goal of the project is to parallelize a well-known Babcock-Leighton solar dynamo model, which can be used to study the development of the Sun's global magnetic field and thus solar activity. The prediction of solar activity remains challenging, as a quasi-periodic, stochastic process lies in the background. In addition, the dynamo, which describes the underlying physics, is still one of the great unsolved problems of astrophysics. Of course, this does not mean that we lack ideas or even models for the development of the magnetic field, but these models require further investigation and development.
The numerical code we created is based on an earlier Fortran program developed several decades ago. Its modernization was motivated by several factors: on the one hand, the code, written by many people over a long period, contains redundant and apparently obsolete parts; on the other hand, further development is difficult because of its structure. Thus, our first goal was to optimize and refactor the previous code, for which we chose the C programming language. Next, we want to parallelize the code using the CUDA framework. The reduction in running time achieved by parallelization enables comprehensive analyses: we can examine the development of several components of the magnetic field at a higher spatial resolution, and we can also map the parameter space of the model. Among our goals is the comparison of different numerical methods, such as ADI (Alternating-Direction Implicit) and FTCS (Forward Time-Centered Space); a minimal sketch of the latter is given below. Overall, during the implementation of the project we want to explore different options in order to choose the right compromise in terms of performance, accuracy, and future improvements.
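As an illustration of the simpler of the two schemes, a minimal FTCS update for a two-dimensional diffusion-type term is sketched below in Python/NumPy. This is only a sketch under simplifying assumptions (Cartesian grid, constant diffusivity), not the project's dynamo code, which is written in C and parallelized with CUDA so that each grid point is updated by its own thread.

import numpy as np

def ftcs_step(u, eta, dt, dx, dy):
    """One explicit FTCS step: u_new = u + dt * eta * laplacian(u) at interior points."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
        + (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2
    )
    return u + dt * eta * lap

# Example: diffusing an initial point source. For dx == dy the explicit scheme is
# stable only for dt <= dx**2 / (4 * eta), a restriction that the ADI scheme removes
# at the cost of tridiagonal solves.
u = np.zeros((128, 128))
u[64, 64] = 1.0
for _ in range(100):
    u = ftcs_step(u, eta=1.0, dt=2e-5, dx=1e-2, dy=1e-2)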
Balázs Szigeti, István Szapudi, Imre Barna, Gergely Gábor Barnaföldi (2024.08.01-2024.10.30.)
Abstract: The Hubble constant \(H_0\) characterizes the rate of the universe's expansion. The discrepancy between the low and high redshift measurements of \(H_0\) is the highest-significance tension within the concordance \(\Lambda\)CDM paradigm. We show that a Gödel-inspired, slowly rotating dark-fluid variant of the concordance model resolves this tension with an angular velocity today of \(\omega_0 \simeq 2\times 10^{-3}\) Gyr\(^{-1}\). Curiously, this is also approximately the maximal rotation with a tangential velocity less than the speed of light at the horizon.
Neelkamal Mallick [1], Suraj Prasad [1], Aditya Nath Mishra [2,4], Raghunath Sahoo [1] and Gergely Gábor Barnaföldi [3] (2024.05.01 - 2024.08.31)
[1] Department of Physics, Indian Institute of Technology Indore
[2] Department of Physics, School of Applied Sciences, REVA University
[3] Wigner Research Centre for Physics
[4] Department of Physics, University Centre For Research & Development (UCRD), Chandigarh University
Abstract: A nucleus having 4n nucleons, such as \(^{8}\)Be, \(^{12}\)C, \(^{16}\)O, etc., is theorized to possess clusters of \(\alpha\) particles (\(^{4}\)He nuclei). In this study, we exploit the anisotropic flow coefficients to discern the effects of an \(\alpha\)-clustered nuclear geometry with respect to a Woods-Saxon nuclear distribution at \(\sqrt{s_{NN}} = 7\) TeV LHC energy.
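For reference, the standard ingredients of such an analysis (generic definitions, not specific to this study) are the Woods-Saxon density profile from which nucleon positions are sampled and the Fourier expansion of the azimuthal particle distribution that defines the anisotropic flow coefficients \(v_n\):
\[
\rho(r) = \frac{\rho_{0}}{1+\exp\!\left(\frac{r-R}{a}\right)},
\qquad
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\!\big[n(\varphi-\Psi_{n})\big],
\qquad
v_{n} = \big\langle \cos\!\big[n(\varphi-\Psi_{n})\big] \big\rangle,
\]
where \(R\) is the nuclear radius, \(a\) the surface diffuseness, and \(\Psi_{n}\) the n-th order symmetry-plane angle; in the \(\alpha\)-clustered case the nucleon positions are instead sampled from compact \(\alpha\)-particle clusters.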
Antal Jakovác, Anna Horváth, Bence Dudás (2024.07.01-09.30)
Abstract: Environmental sound sample analysis using artificial intelligence methods for applied research.
Dániel Léber, Mihály Ormos (2024.07.01-09.30)
Abstract: We focus on entropy as a measure of risk and the role it can play in equilibrium asset pricing. Similarly to the traditionally used capital asset pricing model (CAPM), the entropy can be divided into a mutual component, measuring the comovement with the market portfolio (the non-diversifiable risk), and a conditional component, measuring the remaining diversifiable risk. We investigate the relationship between these components and the conventionally used risk metrics, such as standard deviation and beta. We also propose a better solution to the notorious puzzles of asset pricing. Entropy as a measure of risk has already been described, and its advantages in portfolio optimization and risk management are also acknowledged in the economic literature. We use data from the OpenBB database and Kenneth R. French's data library to calculate daily returns and the various risk measures associated with them. We show the diversification effects of different risk measures and their stability over time. We introduce a new method to separate the individual and systemic risks of assets. We also validate our model using the conventional test of the CAPM model. Our regression-based results are tested both in-sample and out-of-sample. The robustness of our model is evaluated by both cross-validation and the use of rolling windows over time.
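Stated compactly (a standard information-theoretic identity; the notation here is illustrative rather than taken from the project):
\[
H(X) = I(X;M) + H(X \mid M),
\]
where \(X\) is the return of an asset, \(M\) is the return of the market portfolio, \(I(X;M)\) is their mutual information (the comovement with the market), and \(H(X\mid M)\) is the conditional entropy (the residual, diversifiable uncertainty), in analogy with the CAPM split of total risk into systematic and idiosyncratic parts.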
Zoltán Lehóczky, Márk Bartha (2024.01.01 - 05.31)
Lombiq Ltd.
Link: GPU Day Case Study
Abstract: GPU Day is a conference organized by the Wigner Scientific Computational Laboratory that focuses on massively parallel computing, visualization, and data analysis in both scientific and industrial applications. We have also presented our Hastlayer .NET hardware accelerator project there many times.
The website serves as an information hub for these annual conferences. It was initially running on Orchard 1-based DotNest, but the time had come to migrate it to Orchard Core. While such migrations always come with certain challenges due to the new features introduced in Orchard Core, we tried to keep things simple by not changing the frontend of the site, even though it is somewhat outdated.
Szabó, Vencel (ELTE); Barbola, Milán Gábor (ELTE); Méhes, Máté (ELTE); Gábor Papp (ELTE), Bíró, Gábor (Wigner); Jólesz, Zsófia (ELTE-Wigner); Dudás, Bence (ELTE-Wigner) (2024.03.01 - 2024.06.30)
Abstract: Proton computed tomography (pCT) differs from "normal" photon-based CT because the basic interaction with matter differs: while in pCT small-angle Coulomb scattering is the dominant process, in (photon) CT the incoming photon is absorbed. This makes pCT a much harder problem.
During the project the students generate input data for the pCT algorithm by running the GATE simulation software at scale on different phantoms. Evaluating the inputs with the Richardson-Lucy algorithm, we determine the number of runs at different positions and angles needed to obtain an acceptable resolution of the image. Further plans involve the optimization of the Richardson-Lucy algorithm on a GPU cluster to speed up the calculations. Furthermore, the students also try to reconstruct the pCT input data from the detector outputs.
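For illustration, the textbook form of the Richardson-Lucy iteration is sketched below in Python/NumPy. This is only the generic deconvolution update, not the project's pCT-specific, GPU-targeted implementation; the point-spread function psf and the test image are placeholders.

import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Iterate x <- x * ( K^T ( y / (K x) ) ), where K denotes convolution with psf."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve2d(estimate, psf, mode="same", boundary="symm")
        ratio = observed / (blurred + eps)
        estimate = estimate * convolve2d(ratio, psf_mirror, mode="same", boundary="symm")
    return estimate

# Example with a small Gaussian blur kernel as a stand-in point-spread function.
x, y = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
observed = np.random.default_rng(1).random((64, 64))
restored = richardson_lucy(observed, psf, n_iter=20)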
Ádám Kadlecsik (2023.11.01 - 2024.03.31)
Eötvös Loránd University
Abstract: The observed small, and thus usually solid, exoplanets generally orbit their central star closely, which makes them easier to detect with terrestrial and space instruments. This also means that they must be tidally locked: their orbit around the central star ("year") and their rotation around their axis ("day") have the same period. Because of the tidally locked orbit, the exoplanet always shows the same side to the star, so the planet has a permanent day and a permanent night hemisphere. The flow can therefore be modeled in a rotating laboratory layout, where the lateral boundary, rotating together with the water body that simulates the atmosphere, has an azimuthal dipole-like heat-flux boundary condition. This can be investigated using both experimental and simulational methods.
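One natural realization of such a dipole-like forcing, written here only as an illustrative assumption and not necessarily the exact form used in the project, is an azimuthal heat-flux profile \( q(\varphi) = q_{0}\cos\varphi \) along the lateral boundary, so that one half of the rim is heated and the opposite half is cooled, mimicking the permanent day and night hemispheres.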