Readers#

pyathena.read_vtk(filename, id0_only=False)[source]#

Convenience wrapper function to read an Athena vtk output file using the AthenaDataSet class.

Parameters
  • filename (str) – Name of the file to open, including extension

  • id0_only (bool) – Flag to read only the vtk files in the id0 directory. Default value is False.

Returns

ds

Return type

AthenaDataSet
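A minimal usage sketch; the file paths below are hypothetical:

```python
import pyathena as pa

# Read a vtk dump into an AthenaDataSet (path is hypothetical)
ds = pa.read_vtk("/path/to/basedir/problem_id.0030.vtk")

# Force reading only the per-process files under the id0 directory
ds = pa.read_vtk("/path/to/basedir/id0/problem_id.0030.vtk", id0_only=True)
```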

pyathena.read_athinput(filename, as_namedtuple=False, verbose=False)[source]#

Function to read an athinput file and the configure block from a simulation log

Parameters
  • filename (str) – Name of the file to open, including extension

  • as_namedtuple (bool) – If True, return each input block as a namedtuple instead of a dictionary

  • verbose (bool) – Print verbose messages

Returns

par – Each item is a dictionary or namedtuple containing an individual input block.

Return type

dict or namedtuple
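A usage sketch; the path is hypothetical, and the block and parameter names assume a standard Athena input file:

```python
import pyathena as pa

# Parse an athinput file into a dict of input blocks (path is hypothetical)
par = pa.read_athinput("/path/to/basedir/athinput.problem_id")

# Each block is itself a dict keyed by parameter name, e.g. (block name
# 'time' and parameter 'tlim' assume a standard Athena input file):
tlim = par["time"]["tlim"]
```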

pyathena.read_hst(filename, force_override=False, verbose=False)[source]#

Function to read an Athena history file and pickle the result

Parameters
  • filename (str) – Name of the file to open, including extension

  • force_override (bool) – Flag to force read of hst file even when pickle exists

  • verbose (bool) – Print verbose messages

Returns

hst – Each column contains time series data

Return type

pandas.DataFrame
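A usage sketch; the path and the plotted column names are assumptions, not guaranteed to exist in every history file:

```python
import pyathena as pa

# Read the history dump into a DataFrame (path is hypothetical)
hst = pa.read_hst("/path/to/basedir/id0/problem_id.hst")

# Inspect the available time-series columns and plot one of them
# ('time' and 'mass' columns are assumed here)
print(hst.columns)
hst.plot(x="time", y="mass")
```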

pyathena.read_sphst(filename, vlist=['time', 'mass', 'x1', 'x2', 'x3', 'v1', 'v2', 'v3', 'age', 'mage', 'mdot', 'mhist', 'active', 'metal0', 'metal1', 'metal2', 'metal3', 'metal4', 'metal5', 'metal6'], force_override=False, verbose=False)[source]#

Function to read an Athena star particle history file and pickle the result

Parameters
  • filename (str) – Name of the file to open, including extension

  • force_override (bool) – Flag to force read of the sphst file even when a pickle exists

  • verbose (bool) – Print verbose messages

Returns

hst – Each column contains time series data

Return type

pandas.DataFrame

pyathena.read_timeit(filename, force_override=False, verbose=False)[source]#

Function to read a timeit log file and pickle the result

Parameters
  • filename (str) – Name of the file to open, including extension

  • force_override (bool) – Flag to force read of the timeit file even when a pickle exists

  • verbose (bool) – Print verbose messages

Returns

hst – Each column contains time series data

Return type

pandas.DataFrame

pyathena.read_starpar_vtk(filename, force_override=False, verbose=False)[source]#

Read Athena starpar vtk output. Returns a DataFrame containing mass, position, velocity, age, etc.

Parameters
  • filename (str) – Name of the file to open, including extension

  • force_override (bool) – Flag to force read of the starpar vtk file even when a pickle exists

Returns

df – Star particle data

Return type

pandas.DataFrame
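A usage sketch; the path is hypothetical:

```python
import pyathena as pa

# Read star particle data for one snapshot (path is hypothetical)
sp = pa.read_starpar_vtk("/path/to/basedir/id0/problem_id.0030.starpar.vtk")

# Columns such as mass, position, velocity, and age can then be inspected
print(sp.head())
```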

pyathena.read_zprof(filename, force_override=False, verbose=False)[source]#

Function to read a single zprof file and pickle the result

Parameters
  • filename (str) – Name of the file to open, including extension

  • force_override (bool) – Flag to force read of the zprof file even when a pickle exists

Returns

df

Return type

pandas.DataFrame

pyathena.read_zprof_all(dirname, problem_id, phase='whole', savdir=None, force_override=False)[source]#

Function to read all zprof files in a directory, merge them into an xarray Dataset, and write the result to a NetCDF file.

Note: An xarray DataArray holds a single multi-dimensional variable and its coordinates, while an xarray Dataset holds multiple variables that potentially share the same coordinates.

Parameters
  • dirname (str) – Name of the directory where zprof files are located

  • problem_id (str) – Prefix of zprof files

  • phase (str) – Name of the thermal phase, e.g., whole, phase1, …, phase5 (cold, intermediate, warm, hot1, hot2)

  • savdir (str) – Name of the directory in which to save the data as a NetCDF file. Default value is dirname.

  • force_override (bool) – Flag to force read of zprof files even when a NetCDF file exists

Returns

ds

Return type

xarray.Dataset
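A usage sketch; the directory layout is hypothetical, and the selection at the end assumes the merged Dataset carries a time coordinate:

```python
import pyathena as pa

# Merge all zprof files for one thermal phase into a Dataset and cache it
# as a NetCDF file (directory layout is hypothetical)
ds = pa.read_zprof_all("/path/to/basedir/id0", "problem_id", phase="whole")

# Variables share coordinates; select a single output time
# (a 'time' coordinate is assumed here)
ds_t = ds.sel(time=100.0, method="nearest")
```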

pyathena.read_rst(filename, verbose=False)[source]#

Wrapper function that returns a RestartHandler instance to read and manipulate a restart file

pyathena.read_hdf5(filename, header_only=False, chunks=None, raw=False, num_ghost=0, **kwargs)[source]#

Read an Athena hdf5 file and convert it to an xarray Dataset

Parameters
  • filename (str) – Data filename

  • header_only (bool) – Flag to read only attributes, not data.

  • chunks (dict or None, default: None) – If provided, used to load the data into dask arrays.

  • raw (bool, optional) – If True, return raw data without merging MeshBlocks into a single array. Default is False.

  • num_ghost (int, optional) – Number of ghost zones to include in the data. Default is 0.

  • **kwargs (dict, optional) – Extra arguments passed to athdf. Refer to athdf documentation for a list of all possible arguments.

Returns

ds – Fluid data

Return type

xarray.Dataset

See also

io.athena_read.athdf, load_sim.LoadSim.load_hdf5

Examples

>>> from pyathena.io import read_hdf5
>>> ds = read_hdf5("/path/to/hdf/file")
>>> from pyathena.load_sim import LoadSim
>>> s = LoadSim("/path/to/basedir")
>>> ds = read_hdf5(s.files['hdf5']['prim'][30])

Particle Readers#

pyathena.read_partab(filename, **kwargs)[source]#

Read Athena++ particle tab file (par?.tab) into a DataFrame.

Columns are read from the file header. The DataFrame index is set to the particle id (pid) and sorted.

Parameters
  • filename (str) – Path to the .tab particle output file.

  • **kwargs (dict, optional) – Extra arguments passed to partab() (e.g., raw=True to skip column-name parsing).

Returns

ds – Particle data indexed by pid.

Return type

pandas.DataFrame

Examples

>>> from pyathena import read_partab
>>> df = read_partab("/path/to/par0.tab")
>>> df.head()

pyathena.read_parhst(filename, **kwargs)[source]#

Read an individual particle history file into a DataFrame.

The file is expected to be a CSV-style text file produced by Athena++ for tracking individual particle trajectories over time.

Parameters
  • filename (str) – Path to the particle history file.

  • **kwargs (dict, optional) – Extra arguments passed to pandas.read_csv() (e.g., sep, header, names).

Returns

ds – Individual particle history.

Return type

pandas.DataFrame
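A usage sketch; the file name and path are hypothetical:

```python
import pyathena as pa

# Read one particle's trajectory history (path is hypothetical)
phst = pa.read_parhst("/path/to/basedir/problem_id.par0.csv")
print(phst.head())
```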

RestartHandler#

class pyathena.RestartHandler(filename, verbose=False)[source]#

Class to read and manipulate Athena restart files.

cut_z(zmin, zmax)[source]#

Cut vertical domain and store it to data_target

degrade(check_divB=True)[source]#

Degrade data and store it to data_target

pop_scalar(ipop)[source]#

Pop the passive scalar with index ipop

read(verbose=None)[source]#

Read full data

refine(check_divB=True)[source]#

Refine data and store it to data_target

remove_all_targets()[source]#

Remove previously manipulated data

reset_grids(ngrid=None)[source]#

Reset grid list with new grid size

Parameters

ngrid (array-like) – Grid dimensions [nx, ny, nz]

reset_par(pid=None)[source]#

Reset input parameters. This may not be handled fully automatically and can require manual adjustment.

write(outdir=None, pid='newrst', itime=0)[source]#

Write target data
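A sketch of a typical workflow, combining the methods above; the paths are hypothetical:

```python
import pyathena as pa

# Open a restart file and load the full data (path is hypothetical)
rh = pa.read_rst("/path/to/basedir/id0/problem_id.0030.rst")
rh.read()

# Degrade the resolution (the result is stored in data_target), then write
rh.degrade(check_divB=True)
rh.write(outdir="/path/to/outdir", pid="newrst", itime=0)
```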

TimingReader#

class pyathena.TimingReader(basedir, problem_id)[source]#

Reader for Athena++ timing output files.

Reads <problem_id>.loop_time.txt and <problem_id>.task_time.txt from the simulation base directory and exposes them as pandas DataFrames via load_loop_time() and load_task_time().

Parameters
  • basedir (str) – Base directory of the simulation.

  • problem_id (str) – Problem ID used to locate timing files.

Examples

>>> from pyathena import TimingReader
>>> tr = TimingReader("/path/to/basedir", "problem_id")
>>> df_loop = tr.load_loop_time()
>>> df_tasks = tr.load_task_time(groups=['Hydro', 'Primitives'])

load_loop_time()[source]#

Read .loop_time.txt file

Returns

The breakdown of the time spent in each step of the main loop, including Before, TimeIntegratorTaskList, SelfGravity, and After

Return type

pandas.DataFrame

load_task_time(groups=None)[source]#

Read .task_time.txt file

Parameters

groups (list) – If provided, tasks whose names contain one of the strings in the list are grouped together; all other tasks are summed and stored in 'Others'. e.g., ['Hydro', 'Primitives', 'UserWork']

Returns

The breakdown of time taken by each task of the time integrator

Return type

dictionary of pandas.DataFrame