Towards efficient, resistive, multi-fluid merger simulations

  • Alex Wright
  • Ian Hawke

Non-ideal MHD is needed

Ideal MHD misses out on:

  • correct EM fields (interior/exterior);
  • magnetic reconnection;
  • accretion;
  • entrainment.

So far:

  • Resistive GRMHD by Dionysopoulou (2013), Palenzuela (2009), Qian (2016);
  • Charged multi-fluid by Andersson (2017), Amano (2016).

Difficulties

  • More realistic models can be stiff;
  • Require implicit schemes for stability:
    • E.g. IMEX (Pareschi & Russo 2004).

$$ \begin{align} \partial_t q = \mathcal{F}(q) + \frac{1}{\epsilon} \mathcal{S}(q) \end{align} $$

Each IMEX stage treats the non-stiff terms ($\mathcal{F}$) explicitly through $G$ and the stiff sources ($\mathcal{S}/\epsilon$) implicitly through $H$:

$$ \begin{align} q^{(i)} = q^n + \Delta t G(q^n) + \Delta t H(q^{(i)}) \end{align} $$
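
A minimal sketch of such an implicit stage, assuming a toy scalar stiff source $H(q) = (q_{\rm eq} - q)/\epsilon$ in place of the full resistive/multi-fluid sources: the stage equation becomes a per-cell root-find for $q^{(i)}$, solved here by Newton iteration (all names are hypothetical, not taken from the actual code):

```cuda
// Sketch of one implicit IMEX stage, q^(i) = q* + dt * H(q^(i)), solved by
// Newton iteration on R(q) = q - q* - dt*H(q). The stiff source is a toy
// scalar relaxation H(q) = (q_eq - q)/eps; all names are illustrative.
#include <cstdio>
#include <cmath>

__host__ __device__ double H(double q, double q_eq, double eps)
{
    return (q_eq - q) / eps;        // stiff relaxation towards equilibrium
}

__host__ __device__ double dHdq(double eps)
{
    return -1.0 / eps;              // d/dq of the source above
}

// One implicit stage: q* = q^n + dt*G(q^n) is the known explicit part.
__host__ __device__ double imexStage(double q_star, double dt,
                                     double q_eq, double eps)
{
    double q = q_star;              // initial guess: the explicit update
    for (int it = 0; it < 50; ++it) {
        double R  = q - q_star - dt * H(q, q_eq, eps);
        double dR = 1.0 - dt * dHdq(eps);
        double dq = R / dR;
        q -= dq;
        if (fabs(dq) < 1e-12) break;
    }
    return q;                       // stage value q^(i)
}

int main()
{
    double qn = 1.0, q_eq = 0.0, eps = 1e-6, dt = 1e-3;
    double q_star = qn;             // toy problem: no non-stiff terms, G = 0
    printf("q^(i) = %.6e (relaxed towards q_eq = %.1f)\n",
           imexStage(q_star, dt, q_eq, eps), q_eq);
    return 0;
}
```

Because the stage is solved per cell, the same routine can be called from a GPU kernel, which is why it is marked `__host__ __device__` in this sketch.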



Performance

  • Performance depends on the timescale, $\epsilon$;
  • Expect a $5 \times$ slow-down.




Massively parallel processors

Seeing wide use in scientific software:

  • $\sim 100 \times$ greater FLOPS than a CPU;
  • Common on HPC clusters;
  • FLOPS increasing faster than for CPUs.


GPU-capable codes

Some examples include:

  • Wong (2011): MHD (non-relativistic);
  • Zhang (2018): AMR-MHD (non-relativistic);
  • Zink (2011): GRMHD (static spacetime).


    All examples use (R)MHD and explicit numerical methods.

    How will implicit schemes transfer?

METHOD:

  • Lightweight, multi-physics MHD code;



Porting to GPU

  • Time integration is computationally expensive;
  • Primitive recovery is required within each IMEX stage;
  • Efficiency demands correct use of shared memory (see the sketch below).
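
As a sketch only, assuming one thread per cell and a toy scalar "recovery" in place of the real conserved-to-primitive root-find (the names and layout are hypothetical, not taken from the actual code): stage the cell's conserved variables in shared memory, iterate there, then write the primitives back to global memory.

```cuda
// Sketch of per-cell primitive recovery on the GPU, with conserved variables
// staged in shared memory. The "recovery" is a stand-in scalar Newton solve;
// a real RMHD recovery replaces toyResidual/toyDeriv.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define NCONS 3   // conserved variables per cell (illustrative)

__device__ double toyResidual(double p, double q) { return p + p * p * p - q; }
__device__ double toyDeriv   (double p)           { return 1.0 + 3.0 * p * p; }

__global__ void consToPrims(const double* cons, double* prims, int nCells)
{
    extern __shared__ double sCons[];                // NCONS values per thread
    int cell = blockIdx.x * blockDim.x + threadIdx.x;
    if (cell >= nCells) return;

    // Stage this cell's conserved variables: coalesced reads from global
    // memory, repeated access from fast shared memory inside the iteration.
    for (int v = 0; v < NCONS; ++v)
        sCons[threadIdx.x * NCONS + v] = cons[v * nCells + cell];

    // Newton iteration for the primitive, reading only from shared memory.
    double q = sCons[threadIdx.x * NCONS + 0];
    double p = q;                                    // initial guess
    for (int it = 0; it < 30; ++it) {
        double dp = toyResidual(p, q) / toyDeriv(p);
        p -= dp;
        if (fabs(dp) < 1e-12) break;
    }
    prims[cell] = p;
}

int main()
{
    const int n = 1024, threads = 128;
    double *cons, *prims;
    cudaMallocManaged(&cons,  NCONS * n * sizeof(double));
    cudaMallocManaged(&prims, n * sizeof(double));
    for (int i = 0; i < NCONS * n; ++i) cons[i] = 2.0;

    size_t shmem = threads * NCONS * sizeof(double);
    consToPrims<<<(n + threads - 1) / threads, threads, shmem>>>(cons, prims, n);
    cudaDeviceSynchronize();
    printf("prims[0] = %f\n", prims[0]);
    cudaFree(cons); cudaFree(prims);
    return 0;
}
```

The point of the pattern is memory traffic: conserved variables are read once from global memory and then accessed repeatedly from fast shared memory during the root-find.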


Performance


  • Parallel speed-up of $21\times$ versus the CPU;
  • Further optimisations are possible/necessary;
  • More details in Wright & Hawke (2019), 'Resistive and multi-fluid RMHD on graphics processing units'.


Model extensions


    Examples from:
  • Classical turbulence simulations (LES);
  • Radiative/reactive flows;
  • Radice (2017) - GRLES;
  • Giacomazzo (2015) - subgrid source terms.

    Source terms allow:
  • An easy way to add additional physics;
  • Computationally cheaper than solving the full model.


REGIME:

A resistive extension to ideal MHD*



*in preparation


Chapman-Enskog expansion:

  • Start from resistive MHD, where $\overline{q}$ are the stiff and $q$ the non-stiff variables:

    $$ \begin{align} \partial_t q + \partial_x f(q, \overline{q}) = s(q, \overline{q}) \end{align} $$

  • Expand the equations around the equilibrium (ideal) solution:

    $$ \begin{align} \overline{q} = \overline{q}_0 + \epsilon \overline{q}_1 + \mathcal{O}(\epsilon^2) \end{align} $$

Extension

To first order in $\epsilon$:

$$ \begin{align} \partial_t q + \partial_x f = s + \epsilon \partial_x D \end{align} $$
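
Schematically (the full derivation is in the paper in preparation), if the stiff variables are assumed to obey a relaxation-type equation, the expansion proceeds order by order in $\epsilon$:

$$ \begin{align} \partial_t \overline{q} + \partial_x \overline{f}(q, \overline{q}) = \frac{1}{\epsilon} \left[ \overline{q}_0(q) - \overline{q} \right] \end{align} $$

$$ \begin{align} \mathcal{O}(\epsilon^{-1}): \quad \overline{q} = \overline{q}_0(q), \qquad \mathcal{O}(1): \quad \overline{q}_1 = -\left[ \partial_t \overline{q}_0 + \partial_x \overline{f}(q, \overline{q}_0) \right] \end{align} $$

Eliminating $\partial_t \overline{q}_0$ with the leading-order evolution of $q$ and substituting $\overline{q} \approx \overline{q}_0 + \epsilon \overline{q}_1$ into $f$ and $s$ gives the first-order diffusive correction $\epsilon \partial_x D$.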


Features:

  • New system extends ideal MHD;
  • Stiff in the opposite limit to resistive MHD, since $\epsilon \propto 1/\sigma$;
  • Small contribution near the ideal MHD limit (see the sketch below).
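
A minimal illustration of why this is cheap, using a toy scalar analogue (linear advection with a small diffusive correction $\epsilon \partial_x D$, $D = \partial_x u$) rather than the MHD system itself: the correction is just an extra explicit flux, so no implicit solve is needed, and for small $\epsilon$ it barely tightens the time step. All names are hypothetical.

```cuda
// Toy analogue of the extension: explicit upwind advection plus a small
// diffusive correction eps * d/dx(du/dx), added as an extra explicit flux.
// This mimics treating s + eps * d_x D without any implicit solve.
#include <cstdio>
#include <vector>
#include <cmath>

int main()
{
    const int    N   = 200;
    const double a   = 1.0;          // advection speed (the "ideal" part)
    const double eps = 1e-3;         // small parameter, eps ~ 1/sigma
    const double dx  = 1.0 / N;
    const double dt  = 0.4 * dx;     // advective CFL; diffusive limit dx*dx/(2*eps) is less restrictive here
    std::vector<double> u(N), un(N);

    for (int i = 0; i < N; ++i)      // initial data: a smooth bump
        u[i] = std::exp(-100.0 * std::pow(i * dx - 0.5, 2));

    for (int n = 0; n < 200; ++n) {  // periodic, first-order explicit update
        for (int i = 0; i < N; ++i) {
            int im = (i - 1 + N) % N, ip = (i + 1) % N;
            double fluxR = a * u[i],  fluxL = a * u[im];              // upwind (a > 0)
            double diffR = (u[ip] - u[i]) / dx, diffL = (u[i] - u[im]) / dx;
            un[i] = u[i] - dt / dx * (fluxR - fluxL)
                         + eps * dt / dx * (diffR - diffL);
        }
        u.swap(un);
    }
    printf("u[N/2] after 200 steps: %f\n", u[N / 2]);
    return 0;
}
```

Because $\epsilon$ is small in the near-ideal regime, the diffusive term barely restricts the explicit time step, which is where the speed-up over the full (stiff, implicit) model comes from.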



Results:


  • Extremely good agreement with resistive MHD;
  • Expected convergence with conductivity.

Results:


  • Many times faster than the full model.


What's next?


  • Finalizing 'A resistive extension to ideal MHD';
  • Apply to resistive GRMHD;
  • Explore application to multi-fluid equations.

Summary

  • GPUs to accelerate resistive and multi-fluid models:
    • speed-up limited by memory usage;
    • shown $21\times$ speed-ups are possible;
    • improved design will allow further acceleration.
  • Resistive extension to ideal MHD:
    • completed in SR with encouraging results;
    • results match the full model in a fraction of the time;
    • application to GRMHD will allow merger, accretion and MRI studies;
    • method could be useful for multi-fluid and radiative models.