Research

Tensor methods, scalable inference, and systems for serious workloads

My current work focuses on tensor networks, distributed computing, and scaling inference for probabilistic models. I am particularly interested in the boundary between algorithmic ideas and the execution systems that make them practical.

Tensor networks · Scalable probabilistic inference · Distributed computing · ML systems
Selected work

Featured publications

NeurIPS 2025

Exploiting Dynamic Sparsity in Einsum

Christoph Staudt, Mark Blacher, Tim Hoffmann, Kaspar Kasche, Olaf Beyersdorff, Joachim Giesen

First author · Einsum · Sparse tensors · ML systems

Introduces a hybrid einsum execution strategy that switches tensor representations based on evolving sparsity, outperforming static dense or sparse approaches on benchmark workloads.
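As a rough illustration of the idea (not the paper's actual algorithm or switching heuristic), a two-operand kernel might measure operand density at runtime and dispatch to a sparse or dense path accordingly:

```python
import numpy as np

def density(x):
    """Fraction of nonzero entries in a dense array."""
    return np.count_nonzero(x) / x.size

def matmul_auto(a, b, threshold=0.05):
    """Toy sketch of sparsity-aware dispatch for "ij,jk->ik".

    The function name and the 5% threshold are illustrative
    assumptions, not taken from the paper.
    """
    if density(a) < threshold:
        # Sparse path: touch only the stored nonzeros of `a`.
        out = np.zeros((a.shape[0], b.shape[1]))
        for i, j in zip(*np.nonzero(a)):
            out[i] += a[i, j] * b[j]
        return out
    # Dense path: hand the whole contraction to the BLAS-backed einsum.
    return np.einsum("ij,jk->ik", a, b)
```

A real engine would make this decision per intermediate tensor, since sparsity can appear or vanish as a contraction proceeds.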

SEA 2024

Improved Cut Strategy for Tensor Network Contraction Orders

Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, Joachim Giesen

First author · Tensor networks · Optimization · Planning

Presents a new graph-cut strategy for tensor-network contraction planning that reduces floating-point cost and avoids expensive runtime hyperparameter tuning.
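Why contraction order matters for floating-point cost can be seen with NumPy's built-in path optimizer; this is a generic three-operand illustration, not the cut strategy from the paper:

```python
import numpy as np

# Three chained operands: the order in which pairs are contracted
# determines the total FLOP count of the einsum.
a = np.random.rand(8, 512)
b = np.random.rand(512, 512)
c = np.random.rand(512, 8)

# "optimal" searches over contraction orders; the returned report
# compares the naive cost against the optimized path's cost.
path, info = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")
print(info)
```

Contracting (a·b) first costs far more than contracting (b·c) first here, because the small outer dimensions (8) should be introduced as early as possible; good planning finds such orders without runtime tuning.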

NeurIPS 2024

Einsum Benchmark: Enabling the Development of Next-Generation Tensor Execution Engines

Mark Blacher, Christoph Staudt, Julien Klaus, Maurice Wenig, Niklas Merk, Alexander Breuer, Max Engel, Sören Laue, Joachim Giesen

BenchmarkingToolingML systems

Introduces an open benchmark suite, generators, and converters for evaluating tensor execution engines on a broader and more realistic range of einsum workloads.

All publications

The full list includes paper and code links where publicly available.

Research software

Public tooling and code ecosystems

I maintain or contribute to public research software through the TI2 group ecosystem and related tooling around tensor execution and benchmarking.

einsum.org

Tool overview

Public entry point for the tools, demos, and systems work of the research group, including the web experiences I designed and implemented.

Code-linked papers

Selected repositories

Where public repositories are available, the publication cards above link directly to code for contraction optimization and SQL-backed einsum execution.
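To sketch what SQL-backed einsum execution means (the schema and table names below are invented for illustration), a matrix product `ij,jk->ik` over tensors stored in COO form maps to a join on the shared index `j` and a GROUP BY over the output indices `i, k`:

```python
import sqlite3
import numpy as np

# Hypothetical COO schema: one table per operand, one row per nonzero.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (i INTEGER, j INTEGER, val REAL);
    CREATE TABLE B (j INTEGER, k INTEGER, val REAL);
""")

a = np.array([[1.0, 0.0], [2.0, 3.0]])
b = np.array([[0.0, 4.0], [5.0, 6.0]])
conn.executemany("INSERT INTO A VALUES (?,?,?)",
                 [(int(i), int(j), float(a[i, j]))
                  for i, j in zip(*np.nonzero(a))])
conn.executemany("INSERT INTO B VALUES (?,?,?)",
                 [(int(j), int(k), float(b[j, k]))
                  for j, k in zip(*np.nonzero(b))])

# Summed index j -> join key; output indices i, k -> GROUP BY columns.
rows = conn.execute("""
    SELECT A.i, B.k, SUM(A.val * B.val)
    FROM A JOIN B ON A.j = B.j
    GROUP BY A.i, B.k
""").fetchall()

out = np.zeros((2, 2))
for i, k, v in rows:
    out[i, k] = v
```

Entries absent from the result are implicit zeros, so the query computes the same product as dense matrix multiplication while only ever touching stored nonzeros.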

Research recognition

Awards

Research award · 2025

ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning

ASPLOS / EuroSys

Awarded for systems-oriented work on high-performance intra-operator parallelism in distributed deep learning, aligning with my research on scalable execution and ML infrastructure.

Research award · 2014–2020

Scholarship

Studienstiftung des deutschen Volkes

Supported by the German Academic Scholarship Foundation in recognition of academic excellence and interdisciplinary promise.