Profiling
Quick profiling in your terminal
Note
This is only meant to be used for quick profiling or for programmatically accessing the profiling results. For more detailed and GUI-friendly profiling, proceed to the next section.
Simply replace Base.@time or Base.@timed with Reactant.Profiler.@time or Reactant.Profiler.@timed (invoked as Reactant.@time and Reactant.@timed in the examples below). If the function is not already a Reactant-compiled function, we will compile it automatically (with sync=true).
using Reactant
x = Reactant.to_rarray(randn(Float32, 100, 2))
W = Reactant.to_rarray(randn(Float32, 10, 100))
b = Reactant.to_rarray(randn(Float32, 10))
linear(x, W, b) = (W * x) .+ b
Reactant.@time linear(x, W, b)

WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1767034401.546016 4144 profiler_session.cc:103] Profiler session initializing.
I0000 00:00:1767034401.546071 4144 profiler_session.cc:118] Profiler session started.
I0000 00:00:1767034401.546320 4144 profiler_session.cc:68] Profiler session collecting data.
I0000 00:00:1767034401.547962 4144 save_profile.cc:150] Collecting XSpace to repository: /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_21/runnervmh13bl.xplane.pb
I0000 00:00:1767034401.548142 4144 save_profile.cc:123] Creating directory: /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_21
I0000 00:00:1767034401.548255 4144 save_profile.cc:129] Dumped gzipped tool data for trace.json.gz to /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_21/runnervmh13bl.trace.json.gz
I0000 00:00:1767034401.548273 4144 profiler_session.cc:136] Profiler session tear down.
I0000 00:00:1767034401.561875 4144 stub_factory.cc:159] Created gRPC channel for address: 0.0.0.0:37937
I0000 00:00:1767034401.562180 4144 grpc_server.cc:93] Server listening on 0.0.0.0:37937
I0000 00:00:1767034401.562198 4144 xplane_to_tools_data.cc:598] serving tool: memory_profile with options: {} using ProfileProcessor
I0000 00:00:1767034401.562205 4144 xplane_to_tools_data.cc:618] Using local processing for tool: memory_profile
I0000 00:00:1767034401.562207 4144 memory_profile_processor.cc:47] Processing memory profile for host: runnervmh13bl
I0000 00:00:1767034401.761208 4144 xplane_to_tools_data.cc:598] serving tool: op_profile with options: {} using ProfileProcessor
I0000 00:00:1767034401.761237 4144 xplane_to_tools_data.cc:618] Using local processing for tool: op_profile
I0000 00:00:1767034401.761443 4144 xprof_thread_pool_executor.cc:22] Creating derived_timeline_trace_events XprofThreadPoolExecutor with 4 threads.
I0000 00:00:1767034401.764393 4144 xprof_thread_pool_executor.cc:22] Creating ProcessTensorCorePlanes XprofThreadPoolExecutor with 4 threads.
I0000 00:00:1767034401.770320 4144 xprof_thread_pool_executor.cc:22] Creating op_stats_threads XprofThreadPoolExecutor with 4 threads.
I0000 00:00:1767034401.834380 4144 xplane_to_tools_data.cc:598] serving tool: overview_page with options: {} using ProfileProcessor
I0000 00:00:1767034401.834407 4144 xplane_to_tools_data.cc:618] Using local processing for tool: overview_page
I0000 00:00:1767034401.834410 4144 overview_page_processor.cc:64] OverviewPageProcessor::ProcessSession
I0000 00:00:1767034401.834772 4144 xprof_thread_pool_executor.cc:22] Creating ConvertMultiXSpaceToInferenceStats XprofThreadPoolExecutor with 1 threads.
runtime: 0.00020817s
compile time: 1.93783097s

Reactant.@timed nrepeat=100 linear(x, W, b)

AggregateProfilingResult(
  runtime = 0.00001906s,
  compile_time = 0.08956981s,
)

Note that the information returned depends on the backend. In particular, the CUDA and TPU backends provide more detailed information about memory usage and allocation; something like the following will be displayed on GPUs:
AggregateProfilingResult(
  runtime = 0.00001235s,
  compile_time = 0.20724930s, # time spent compiling by Reactant
  GPU_0_bfc = MemoryProfileSummary(
    peak_bytes_usage_lifetime = 32.015 MiB, # peak memory usage over the entire program (lifetime of the memory allocator)
    peak_stats = MemoryAggregationStats(
      stack_reserved_bytes = 0 bytes, # memory usage by stack reservation
      heap_allocated_bytes = 30.750 KiB, # memory usage by heap allocation
      free_memory_bytes = 4.228 GiB, # free memory available for allocation or reservation
      fragmentation = 0.0, # fragmentation of memory within [0, 1]
      peak_bytes_in_use = 30.750 KiB # peak memory usage over the entire program
    )
    peak_stats_time = 0.02420451s,
    memory_capacity = 4.228 GiB # memory capacity of the allocator
  )
  flops = FlopsSummary(
    Flops = 5.180502680725853e-7, # [flops / (peak flops * program time)], capped at 1.0
    UncappedFlops = 5.180502680725853e-7,
    RawFlops = 4060.0, # total FLOPs performed
    BF16Flops = 4060.0, # total FLOPs normalized to the bf16 (default) device's peak bandwidth
  )
)

Additionally, on GPUs and TPUs we can use the Reactant.@profile macro to profile the function and get information about each of the kernels that were executed.
Reactant.@profile linear(x, W, b)

I0000 00:00:1767034402.578242 4144 profiler_session.cc:103] Profiler session initializing.
I0000 00:00:1767034402.578268 4144 profiler_session.cc:118] Profiler session started.
I0000 00:00:1767034402.578341 4144 profiler_session.cc:68] Profiler session collecting data.
I0000 00:00:1767034402.579751 4144 save_profile.cc:150] Collecting XSpace to repository: /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_22/runnervmh13bl.xplane.pb
I0000 00:00:1767034402.580007 4144 save_profile.cc:123] Creating directory: /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_22
I0000 00:00:1767034402.580155 4144 save_profile.cc:129] Dumped gzipped tool data for trace.json.gz to /tmp/reactant_profile/plugins/profile/2025_12_29_18_53_22/runnervmh13bl.trace.json.gz
I0000 00:00:1767034402.580173 4144 profiler_session.cc:136] Profiler session tear down.
I0000 00:00:1767034402.580255 4144 xplane_to_tools_data.cc:598] serving tool: memory_profile with options: {} using ProfileProcessor
I0000 00:00:1767034402.580264 4144 xplane_to_tools_data.cc:618] Using local processing for tool: memory_profile
I0000 00:00:1767034402.580267 4144 memory_profile_processor.cc:47] Processing memory profile for host: runnervmh13bl
I0000 00:00:1767034402.580454 4144 xplane_to_tools_data.cc:598] serving tool: op_profile with options: {} using ProfileProcessor
I0000 00:00:1767034402.580464 4144 xplane_to_tools_data.cc:618] Using local processing for tool: op_profile
I0000 00:00:1767034402.580658 4144 xplane_to_tools_data.cc:598] serving tool: overview_page with options: {} using ProfileProcessor
I0000 00:00:1767034402.580682 4144 xplane_to_tools_data.cc:618] Using local processing for tool: overview_page
I0000 00:00:1767034402.580684 4144 overview_page_processor.cc:64] OverviewPageProcessor::ProcessSession
I0000 00:00:1767034402.580904 4144 xprof_thread_pool_executor.cc:22] Creating ConvertMultiXSpaceToInferenceStats XprofThreadPoolExecutor with 1 threads.
I0000 00:00:1767034402.697528 4144 xplane_to_tools_data.cc:598] serving tool: kernel_stats with options: {} using ProfileProcessor
I0000 00:00:1767034402.697559 4144 xplane_to_tools_data.cc:618] Using local processing for tool: kernel_stats
I0000 00:00:1767034402.878423 4144 xplane_to_tools_data.cc:598] serving tool: framework_op_stats with options: {} using ProfileProcessor
I0000 00:00:1767034402.878453 4144 xplane_to_tools_data.cc:618] Using local processing for tool: framework_op_stats
╔================================================================================╗
║ SUMMARY ║
╚================================================================================╝
AggregateProfilingResult(
  runtime = 0.00001906s,
  compile_time = 0.08451643s, # time spent compiling by Reactant
)

On GPUs this would look something like the following:
╔================================================================================╗
║ KERNEL STATISTICS ║
╚================================================================================╝
┌───────────────────┬─────────────┬────────────────┬──────────────┬──────────────┬──────────────┬──────────────┬───────────┬──────────┬────────────┬─────────────┐
│ Kernel Name │ Occurrences │ Total Duration │ Avg Duration │ Min Duration │ Max Duration │ Static Shmem │ Block Dim │ Grid Dim │ TensorCore │ Occupancy % │
├───────────────────┼─────────────┼────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼───────────┼──────────┼────────────┼─────────────┤
│ gemm_fusion_dot_1 │ 1 │ 0.00000266s │ 0.00000266s │ 0.00000266s │ 0.00000266s │ 8.000 KiB │ 64,1,1 │ 1,1,1 │ ✗ │ 50.0% │
│ loop_add_fusion │ 1 │ 0.00000157s │ 0.00000157s │ 0.00000157s │ 0.00000157s │ 0 bytes │ 20,1,1 │ 1,1,1 │ ✗ │ 31.2% │
└───────────────────┴─────────────┴────────────────┴──────────────┴──────────────┴──────────────┴──────────────┴───────────┴──────────┴────────────┴─────────────┘
╔================================================================================╗
║ FRAMEWORK OP STATISTICS ║
╚================================================================================╝
┌───────────────────┬─────────┬─────────────┬─────────────┬─────────────────┬───────────────┬──────────┬───────────┬──────────────┬──────────┐
│ Operation │ Type │ Host/Device │ Occurrences │ Total Self-Time │ Avg Self-Time │ Device % │ Memory BW │ FLOP Rate │ Bound By │
├───────────────────┼─────────┼─────────────┼─────────────┼─────────────────┼───────────────┼──────────┼───────────┼──────────────┼──────────┤
│ gemm_fusion_dot.1 │ Unknown │ Device │ 1 │ 0.00000266s │ 0.00000266s │ 62.88% │ 1.71 GB/s │ 1.51 GFLOP/s │ HBM │
│ +/add │ add │ Device │ 1 │ 0.00000157s │ 0.00000157s │ 37.12% │ 0.12 GB/s │ 0.04 GFLOP/s │ HBM │
└───────────────────┴─────────┴─────────────┴─────────────┴─────────────────┴───────────────┴──────────┴───────────┴──────────────┴──────────┘
╔================================================================================╗
║ SUMMARY ║
╚================================================================================╝
AggregateProfilingResult(
  runtime = 0.00002246s,
  compile_time = 0.16447328s, # time spent compiling by Reactant
  GPU_0_bfc = MemoryProfileSummary(
    peak_bytes_usage_lifetime = 32.015 MiB, # peak memory usage over the entire program (lifetime of the memory allocator)
    peak_stats = MemoryAggregationStats(
      stack_reserved_bytes = 0 bytes, # memory usage by stack reservation
      heap_allocated_bytes = 31.250 KiB, # memory usage by heap allocation
      free_memory_bytes = 4.228 GiB, # free memory available for allocation or reservation
      fragmentation = 0.0, # fragmentation of memory within [0, 1]
      peak_bytes_in_use = 31.250 KiB # peak memory usage over the entire program
    )
    peak_stats_time = 0.00812043s,
    memory_capacity = 4.228 GiB # memory capacity of the allocator
  )
  flops = FlopsSummary(
    Flops = 3.747296689092735e-6, # [flops / (peak flops * program time)], capped at 1.0
    UncappedFlops = 3.747296689092735e-6,
    RawFlops = 4060.0, # total FLOPs performed
    BF16Flops = 4060.0, # total FLOPs normalized to the bf16 (default) device's peak bandwidth
  )
)
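Since Reactant.@timed returns the aggregate result as a value (as shown above) rather than only printing it, the measurements can also be read programmatically. The following is a minimal sketch that assumes the printed names correspond to properties of the returned object; confirm the exact names with propertynames(result).

result = Reactant.@timed nrepeat=100 linear(x, W, b)
# Assumed property names, mirroring the printed output above; verify with propertynames(result).
result.runtime
result.compile_time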
Capturing traces

When running Reactant, it is possible to capture traces using the XLA profiler. These traces show where the XLA-specific parts of the program spend their time during compilation or execution. Note that tracing and compilation happen on the CPU even when the final execution targets another device such as a GPU or TPU, so including tracing and compilation in a trace will create annotations on the CPU.
Let's set up a simple function which we can then profile:
using Reactant
x = Reactant.to_rarray(randn(Float32, 100, 2))
W = Reactant.to_rarray(randn(Float32, 10, 100))
b = Reactant.to_rarray(randn(Float32, 10))
linear(x, W, b) = (W * x) .+ b

linear (generic function with 1 method)

The profiler can be accessed using the Reactant.with_profiler function.
Reactant.with_profiler("./") do
mylinear = Reactant.@compile linear(x, W, b)
mylinear(x, W, b)
end

10×2 ConcretePJRTArray{Float32,2}:
-5.41415 -4.01757
11.5022 9.40742
3.95437 -6.84381
-8.48645 -1.72537
-2.03209 3.27149
11.9087 -31.8082
4.29847 -0.419206
7.42776 -2.86893
15.7589 -5.79411
-12.6262 16.0354

Running this function should create a folder called plugins inside the directory passed to Reactant.with_profiler, containing the trace files. The traces can then be visualized in different ways.
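For example, after the block above has run you can list the generated run directories from Julia. The timestamped layout used here is inferred from the profiler log messages shown earlier.

# List the timestamped profile runs created under the output directory
# (layout inferred from the log messages above: <dir>/plugins/profile/<timestamp>/).
readdir(joinpath(".", "plugins", "profile"))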
Note
For more insight into the current state of Reactant, it is possible to fetch device information about allocations using the Reactant.XLA.allocatorstats function, as sketched below.
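A minimal sketch of querying those statistics; the zero-argument form used here is an assumption, so check the docstring for device-specific variants and the exact fields of the returned value.

# Query allocator statistics (bytes in use, peak usage, ...) for the default device.
# The zero-argument call is assumed; pass a device explicitly if the docstring requires it.
stats = Reactant.XLA.allocatorstats()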
Perfetto UI

The first and easiest way to visualize a captured trace is the online perfetto.dev tool. Reactant.with_profiler has a keyword argument called create_perfetto_link which creates a usable Perfetto URL for the generated trace. The function blocks execution until the URL has been opened and the trace visualized. The URL only works once.
Reactant.with_profiler("./"; create_perfetto_link=true) do
mylinear = Reactant.@compile linear(x, W, b)
mylinear(x, W, b)
end

Note
It is recommended to use the Chrome browser to open the perfetto URL.
TensorBoard

Another option for visualizing the generated trace files is the TensorBoard profiler plugin. The TensorBoard viewer can offer more detail than the timeline view, such as visualizations of compute graphs.
First, install TensorBoard and its profiler plugin:
pip install tensorboard tensorboard-plugin-profile

And then run the following in the folder where the plugins folder was generated:
tensorboard --logdir ./
Adding Custom Annotations

By default, the traces contain only information captured from within XLA. The Reactant.Profiler.annotate function can be used to annotate the traces for Julia code evaluated during tracing.
Reactant.Profiler.annotate("my_annotation") do
# Do things...
end

The added annotations will be captured in the traces and can be seen in the different viewers alongside the default XLA annotations. When the profiler is not active, the custom annotations have no effect, so they can safely be left in the code at all times.
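For instance, a custom annotation can be wrapped around the compile-and-run step inside Reactant.with_profiler so that it shows up next to the XLA events in the captured trace. This is only a sketch reusing the linear example from above; the annotation label is arbitrary.

Reactant.with_profiler("./") do
    Reactant.Profiler.annotate("compile and run linear") do
        # Both the tracing/compilation work and the execution below are covered
        # by the "compile and run linear" annotation in the resulting trace.
        mylinear = Reactant.@compile linear(x, W, b)
        mylinear(x, W, b)
    end
end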