NVVM Dialect
Refer to the official documentation for more details.
Reactant.MLIR.Dialects.nvvm.bar_warp_sync Method
bar_warp_sync
The nvvm.bar.warp.sync operation performs barrier synchronization for threads within a warp.
This operation causes the executing thread to wait until all threads corresponding to the mask operand have executed a bar.warp.sync with the same mask value before resuming execution.
The mask operand specifies the threads participating in the barrier, where each bit position corresponds to the thread's lane ID within the warp. Only threads with their corresponding bit set in the mask participate in the barrier synchronization.
Important constraints:
- The behavior is undefined if the executing thread is not included in the mask (i.e., the bit corresponding to the thread's lane ID is not set).
- For compute capability sm_6x or below, all threads in the mask must execute the same bar.warp.sync instruction in convergence.
This operation also guarantees memory ordering among participating threads. Threads within the warp that wish to communicate via memory can store to memory, execute bar.warp.sync, and then safely read values stored by other threads in the warp.
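A minimal sketch of typical usage (the full-warp mask constant and the exact textual form are illustrative assumptions):

```mlir
// Synchronize all 32 lanes of the current warp.
%mask = llvm.mlir.constant(-1 : i32) : i32
nvvm.bar.warp.sync %mask : i32
```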
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.barrier Function
barrier
The nvvm.barrier operation performs barrier synchronization and communication within a CTA (Cooperative Thread Array). It causes executing threads to wait for all non-exited threads participating in the barrier to arrive.
The operation takes two optional operands:
- barrierId: Specifies a logical barrier resource with value 0 through 15. Each CTA instance has sixteen barriers numbered 0..15. Defaults to 0 if not specified.
- numberOfThreads: Specifies the number of threads participating in the barrier. When specified, the value must be a multiple of the warp size. If not specified, all threads in the CTA participate in the barrier.
The barrier operation guarantees that when the barrier completes, prior memory accesses requested by participating threads are performed relative to all threads participating in the barrier. It also ensures that no new memory access is requested by participating threads before the barrier completes.
When a barrier completes, the waiting threads are restarted without delay, and the barrier is reinitialized so that it can be immediately reused.
This operation generates an aligned barrier, indicating that all threads in the CTA will execute the same barrier instruction. Behavior is undefined if all threads in the CTA do not reach this instruction.
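A minimal sketch of the two forms (operand names and the exact textual form are illustrative assumptions):

```mlir
// All threads in the CTA synchronize on the default barrier (id 0).
nvvm.barrier
// Barrier %id with only %n participating threads (%n must be a
// multiple of the warp size).
nvvm.barrier id = %id number_of_threads = %n
```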
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.barrier0 Method
barrier0
The nvvm.barrier0 operation is a convenience operation that performs barrier synchronization and communication within a CTA (Cooperative Thread Array) using barrier ID 0. It is functionally equivalent to nvvm.barrier or nvvm.barrier id=0.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.barrier_arrive Function
barrier_arrive
Threads that execute this op announce their arrival at the barrier with the given id and continue their execution.
When barrierId is not present, the default barrier id 0 is used, matching the behavior of the nvvm.barrier Op.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.breakpoint Method
breakpoint
Breakpoint suspends execution of the program for debugging. For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cluster_arrive Method
cluster_arrive
The cluster.arrive can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps' arrival at the barrier without causing the executing thread to wait for other participating threads.
The aligned attribute, when provided, generates the .aligned version of the PTX instruction.
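A minimal sketch of the typical arrive/wait pairing (the textual form of the aligned attribute is an assumption):

```mlir
// Mark this warp's arrival at the cluster barrier, then do independent
// work before waiting for the rest of the cluster.
nvvm.cluster.arrive aligned
// ... independent work ...
nvvm.cluster.wait aligned
```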
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cluster_arrive_relaxed Method
cluster_arrive_relaxed
The cluster.arrive can be used by the threads within the cluster for synchronization and communication. The cluster.arrive instruction marks the warps' arrival at the barrier without causing the executing thread to wait for other participating threads.
The aligned attribute, when provided, generates the .aligned version of the PTX instruction. The .relaxed qualifier on cluster.arrive specifies that there are no memory ordering and visibility guarantees provided for the memory accesses performed prior to cluster.arrive.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cluster_wait Method
cluster_wait
The cluster.wait causes the executing thread to wait for all non-exited threads of the cluster to perform cluster.arrive. The aligned attribute, when provided, generates the .aligned version of the PTX instruction.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.convert_bf16x2_to_f8x2 Method
convert_bf16x2_to_f8x2
This Op converts the given bf16 inputs in a bf16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.convert_f16x2_to_f8x2 Method
convert_f16x2_to_f8x2
This Op converts the given f16 inputs in an f16x2 vector to the specified f8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values from a are packed such that the value converted from the first element of a is stored in the upper 8 bits of dst and the value converted from the second element of a is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the '.relu' variant of the cvt instruction.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.convert_f32x2_to_f6x2 Method
convert_f32x2_to_f6x2
This Op converts each of the given float inputs to the specified fp6 type. The result dst is represented either as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst with 2 MSB bits padded with zeros and the value converted from b is stored in the lower 8 bits of dst with 2 MSB bits padded with zeros. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The relu attribute, when set, lowers to the '.relu' variant of the cvt instruction.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.convert_f32x2_to_f8x2 Method
convert_f32x2_to_f8x2
This Op converts each of the given float inputs to the specified fp8 type. The result dst is represented as an i16 type or as a vector of two i8 types. If dst is returned as an i16 type, the converted values are packed such that the value converted from a is stored in the upper 8 bits of dst and the value converted from b is stored in the lower 8 bits of dst. If dst is returned as a vector type, each converted value is stored as an i8 element in the vector. The rnd and sat attributes specify the rounding and saturation modes respectively. The relu attribute, when set, lowers to the '.relu' variant of the cvt instruction.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.convert_float_to_tf32 Method
convert_float_to_tf32
This Op converts the given f32 input to tf32. The result res is represented as an i32 type. The relu attribute, when set, lowers to the '.relu' variant of the cvt instruction. The rnd and sat attributes specify the rounding and saturation modes respectively.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_commit_group Method
cp_async_bulk_commit_group
This Op commits all prior initiated but uncommitted cp.async.bulk instructions into a cp.async.bulk-group.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_global_shared_cta Function
cp_async_bulk_global_shared_cta
Initiates an asynchronous copy operation from Shared CTA memory to global memory. The 32-bit operand size specifies the amount of memory to be copied, in terms of number of bytes. size must be a multiple of 16. The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access. The byteMask operand is optional. The i-th bit in the 16-bit wide byteMask specifies whether the i-th byte of each 16-byte wide chunk of source data is copied to the destination. If the bit is set, the byte is copied.
Example

```mlir
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size
  : !llvm.ptr<1>, !llvm.ptr<3>

// with l2_cache_hint
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch
  : !llvm.ptr<1>, !llvm.ptr<3>

// with byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size byte_mask = %mask
  : !llvm.ptr<1>, !llvm.ptr<3>

// with both l2_cache_hint and byte_mask
nvvm.cp.async.bulk.global.shared.cta %dst, %src, %size l2_cache_hint = %ch byte_mask = %mask
  : !llvm.ptr<1>, !llvm.ptr<3>
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_prefetch Function
cp_async_bulk_prefetch
Initiates an asynchronous prefetch of data from the location specified by srcMem to the L2 cache.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
Example

```mlir
nvvm.cp.async.bulk.prefetch %src, %size : !llvm.ptr<1>

// with l2_cache_hint
nvvm.cp.async.bulk.prefetch %src, %size l2_cache_hint = %ch : !llvm.ptr<1>
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_shared_cluster_global Function
cp_async_bulk_shared_cluster_global
Initiates an asynchronous copy operation from global memory to cluster's shared memory.
The multicastMask operand is optional. When it is present, the Op copies data from global memory to shared memory of multiple CTAs in the cluster. Operand multicastMask specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_shared_cluster_shared_cta Method
cp_async_bulk_shared_cluster_shared_cta
Initiates an asynchronous copy operation from Shared CTA memory to Shared cluster memory.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_tensor_global_shared_cta Function
cp_async_bulk_tensor_global_shared_cta
Initiates an asynchronous copy of the tensor data from shared::cta memory to global memory. This Op supports all the store modes specified in TMAStoreMode.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_tensor_prefetch Function
cp_async_bulk_tensor_prefetch
Initiates an asynchronous prefetch operation on the tensor data from global memory to L2 cache. This Op supports all the load modes specified in TMALoadMode.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_tensor_reduce Function
cp_async_bulk_tensor_reduce
Initiates an asynchronous reduction operation of tensor data in global memory with tensor data in shared memory.
The mode attribute indicates whether the copy mode is tile or im2col. The redOp attribute specifies the reduction operation applied; the supported reduction operations are add, min, max, inc, dec, and, or, and xor.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_tensor_shared_cluster_global Function
cp_async_bulk_tensor_shared_cluster_global
Initiates an asynchronous copy operation on the tensor data from global memory to shared memory.
The Op has two load modes:
- Tiled Mode: This is the default mode. The source multi-dimensional tensor layout is preserved at the destination.
- Im2col Mode: This mode is used when the im2colOffsets operands are present. The elements in the Bounding Box of the source tensor are rearranged into columns at the destination. In this mode, the tensor has to be at least 3-dimensional.
The multicastMask operand is optional. When it is present, the Op copies data from global memory to shared memory of multiple CTAs in the cluster. Operand multicastMask specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA.
The l2CacheHint operand is optional and specifies the cache eviction policy that may be used during the memory access.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_bulk_wait_group Method
cp_async_bulk_wait_group
This Op waits for the completion of the most recent bulk async-groups.
The $group operand specifies how many of the most recent bulk async-groups may remain pending: the wait completes once at most $group of them are still outstanding. If $group is 0, the op waits until all the most recent bulk async-groups have completed.
The $read attribute indicates that the waiting has to be done until all the bulk async operations in the specified bulk async-group have completed reading from their source locations.
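A minimal sketch of the usual commit/wait pairing (the integer group argument syntax is an assumption):

```mlir
// Commit all prior cp.async.bulk operations into one bulk async-group.
nvvm.cp.async.bulk.commit.group
// Wait until at most one committed bulk async-group is still pending.
nvvm.cp.async.bulk.wait_group 1
```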
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_mbarrier_arrive Method
cp_async_mbarrier_arrive
The cp.async.mbarrier.arrive Op makes the mbarrier object track all prior cp.async operations initiated by the executing thread. The addr operand specifies the address of the mbarrier object in generic address space. The noinc attr impacts how the mbarrier's state is updated.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.cp_async_mbarrier_arrive_shared Method
cp_async_mbarrier_arrive_shared
The cp.async.mbarrier.arrive.shared Op makes the mbarrier object track all prior cp.async operations initiated by the executing thread. The addr operand specifies the address of the mbarrier object in shared memory. The noinc attr impacts how the mbarrier's state is updated.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.dot_accumulate_2way Method
dot_accumulate_2way
Performs a two-way 16-bit to 8-bit dot-product which is accumulated in a 32-bit result. Operand a is a vector of two 16-bit elements and operand b a vector of four 8-bit elements between which the dot product is computed.
The a_type and b_type attributes specify the type of the elements in a and b respectively. If a_type or b_type is s, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If a_type or b_type is u, then the elements in the corresponding vector are zero-extended to 32-bit instead.
The b_hi boolean attribute specifies which two bytes of b are used for the dot product. If b_hi is true, then the dot product is computed between a and elements at indices 2 and 3 of b. If b_hi is false, then the dot product is computed between a and elements at indices 0 and 1 of b.
Operand c is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if either a_type or b_type is signed.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.dot_accumulate_4way Method
dot_accumulate_4way
Performs a four-way byte dot-product which is accumulated in a 32-bit result. Operands a and b are vectors of 4 bytes between which the dot product is computed.
The a_type and b_type attributes specify the type of the elements in a and b respectively. If a_type or b_type is signed, then the elements in the corresponding vector are sign-extended to 32-bit before the dot product is computed. If a_type or b_type is unsigned, then the elements in the corresponding vector are zero-extended to 32-bit instead.
Operand c is a 32-bit integer to which the result is accumulated. It is treated as holding a signed integer if either a_type or b_type is s8.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.elect_sync Function
elect_sync
The elect.sync instruction elects one predicated active leader thread from among a set of threads specified in the membermask. When the membermask is not provided explicitly, a default value of 0xFFFFFFFF is used. The predicate result is set to True for the leader thread, and False for all other threads.
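A minimal sketch, assuming the default full-warp membermask:

```mlir
// %leader is true in exactly one active lane of the warp.
%leader = nvvm.elect.sync -> i1
```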
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.exit Method
exit
Ends execution of a thread. For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.fence_mbarrier_init Method
fence_mbarrier_init
Fence operation that applies to the prior nvvm.mbarrier.init operations.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.fence_proxy Method
fence_proxy
Fence operation with proxy to establish an ordering between memory accesses that may happen through different proxies.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.fence_proxy_acquire Method
fence_proxy_acquire
fence.proxy.acquire is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy.
The address operand addr and the operand size together specify the memory range [addr, addr+size) on which the ordering guarantees on the memory accesses across the proxies are to be provided. The only supported value for the size operand is 128, and it must be an immediate. Generic Addressing is used unconditionally, and the address specified by the operand addr must fall within the .global state space. Otherwise, the behavior is undefined.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.fence_proxy_release Method
fence_proxy_release
fence.proxy.release is a uni-directional fence used to establish ordering between a prior memory access performed via the generic proxy and a subsequent memory access performed via the tensormap proxy. The fence.proxy.release operation can form a release sequence that synchronizes with an acquire sequence that contains the fence.proxy.acquire proxy fence operation.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.griddepcontrol Method
griddepcontrol
If the kind attribute is set to wait, it causes the executing thread to wait until all prerequisite grids in flight have completed and all the memory operations from the prerequisite grids are performed and made visible to the current grid.
When the kind is launch_dependents, it signals that specific dependents the runtime system designated to react to this instruction can be scheduled as soon as all other CTAs in the grid issue the same instruction or have completed.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.inline_ptx Function
inline_ptx
This op allows using PTX directly within the NVVM dialect, while greatly simplifying llvm.inline_asm generation. It automatically handles register size selection and sets the correct read/write access for each operand. The operation leverages the BasicPtxBuilderInterface to abstract away low-level details of PTX assembly formatting.
The `predicate` attribute is used to specify a predicate for the PTX instruction.
Example 1: Read-only Parameters
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count) : !llvm.ptr, i32
// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att
"mbarrier.init.b64 [$0], $1;", "l,r" %arg0, %arg2 : (!llvm.ptr, i32) -> ()
```
Example 2: Read-only and Write-only Parameters
```mlir
%0 = nvvm.inline_ptx "ex2.approx.ftz.f32 $0, $1;" (%input) : f32 -> f32
// Lowers to:
%0 = llvm.inline_asm has_side_effects asm_dialect = att
"ex2.approx.ftz.f32 $0, $1;", "=f,f" %arg0 : (f32) -> f32
```
Example 3: Predicate Usage
```mlir
nvvm.inline_ptx "mbarrier.init.b64 [$0], $1;" (%barrier_gen, %count),
predicate = %pred : !llvm.ptr, i32, i1
// Lowers to:
llvm.inline_asm has_side_effects asm_dialect = att
"@$2 mbarrier.init.b64 [$0], $1;", "l,r,b" %arg0, %arg2, %arg3
: (!llvm.ptr, i32, i1) -> ()
```
Reactant.MLIR.Dialects.nvvm.match_sync Method
match_sync
The match.sync op performs broadcast and compare of operand val across all non-exited threads in thread_mask and returns a mask depending on the kind and an optional predicate.
The matching operation kinds are:
- any: Returns a mask corresponding to the non-exited threads in the thread_mask that have the same value of operand val.
- all: Returns a mask and a predicate. If all non-exited threads in the thread_mask have the same value of operand val, the predicate is set to true and the mask corresponds to the non-exited threads in the thread_mask. Otherwise, the predicate is set to false and the mask is 0.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive Method
mbarrier_arrive
The nvvm.mbarrier.arrive operation performs an arrive-on operation on the mbarrier object at the specified address. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like 'mbarrier.test.wait'), they synchronize with this release pattern.
This operation causes the executing thread to signal its arrival at the barrier. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation. The contents of this state value are implementation-specific.
The operation takes the following operand:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
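A minimal sketch (the !llvm.ptr/i64 types and value names are illustrative assumptions):

```mlir
// Signal arrival and capture the opaque pre-arrival phase token.
%state = nvvm.mbarrier.arrive %barrier : !llvm.ptr -> i64
```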
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive_expect_tx Function
mbarrier_arrive_expect_tx
The nvvm.mbarrier.arrive.expect_tx operation performs an expect-tx operation followed by an arrive-on operation on the mbarrier object. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like 'mbarrier.test.wait'), they synchronize with this release pattern.
This operation first performs an expect-tx operation with the specified transaction count, then performs an arrive-on operation with an implicit count of 1. The expect-tx operation increases the tx-count of the mbarrier object by the specified expectCount value, setting the current phase to expect and tracks the completion of additional asynchronous transactions.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- txcount: An unsigned integer specifying the expected transaction count for the expect-tx operation. This represents the number of asynchronous transactions expected to complete before the barrier phase completes.
- predicate: Optional predicate for conditional execution.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive_expect_tx_shared Function
mbarrier_arrive_expect_tx_shared
This Op is the same as nvvm.mbarrier.arrive.expect_tx except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive_nocomplete Method
mbarrier_arrive_nocomplete
The nvvm.mbarrier.arrive.nocomplete operation performs an arrive-on operation on the mbarrier object with the guarantee that it will not cause the barrier to complete its current phase. Uses the default .release.cta semantics. This release pattern establishes memory ordering for operations occurring in program order before this arrive instruction by making operations from the current thread visible to subsequent operations in other threads within the CTA. When other threads perform corresponding acquire operations (like 'mbarrier.test.wait'), they synchronize with this release pattern.
This operation causes the executing thread to signal its arrival at the barrier with a specified count, but ensures that the barrier phase will not complete as a result of this operation. The operation returns an opaque value that captures the phase of the mbarrier object prior to the arrive-on operation.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- count: Integer specifying the count argument to the arrive-on operation. Must be in the valid range as specified in the mbarrier object contents.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive_nocomplete_shared Method
mbarrier_arrive_nocomplete_shared
This Op is the same as nvvm.mbarrier.arrive.nocomplete except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_arrive_shared Method
mbarrier_arrive_shared
This Op is the same as nvvm.mbarrier.arrive except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_init Function
mbarrier_init
The nvvm.mbarrier.init operation initializes an mbarrier object at the specified memory location.
This operation initializes the mbarrier object with the following state:
- Current phase: 0
- Expected arrival count: count
- Pending arrival count: count
- Transaction count (tx-count): 0
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- count: Integer specifying the number of threads that will participate in barrier synchronization. Must be in the range [1, 2²⁰ - 1].
- predicate: Optional predicate for conditional execution.
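A minimal sketch (value names and types are illustrative assumptions):

```mlir
// Initialize the mbarrier for %count participating threads.
nvvm.mbarrier.init %barrier, %count : !llvm.ptr, i32
```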
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_init_shared Function
mbarrier_init_shared
This Op is the same as nvvm.mbarrier.init except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_inval Method
mbarrier_inval
The nvvm.mbarrier.inval operation invalidates an mbarrier object at the specified memory location.
This operation marks the mbarrier object as invalid, making it safe to repurpose the memory location for other uses or to reinitialize it as a new mbarrier object. It is undefined behavior if the mbarrier object is already invalid.
The operation takes the following operand:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_inval_shared Method
mbarrier_inval_shared
This Op is the same as nvvm.mbarrier.inval except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_test_wait Method
mbarrier_test_wait
The nvvm.mbarrier.test.wait operation performs a non-blocking test for the completion of a specific phase of an mbarrier object. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.
This operation tests whether the mbarrier phase specified by the state operand has completed. It is a non-blocking instruction that immediately returns the completion status without suspending the executing thread.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- state: An opaque value returned by a previous mbarrier.arrive operation on the same mbarrier object during the current or immediately preceding phase.
The operation returns a boolean value indicating whether the specified phase has completed:
- true: The immediately preceding phase has completed.
- false: The phase is still incomplete (current phase).
Memory ordering guarantees: When this wait returns true, the following ordering guarantees hold:
- All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
- All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.
- All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
- Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.
- No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.
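A minimal sketch (value names and types are illustrative assumptions; %state comes from a prior nvvm.mbarrier.arrive):

```mlir
// Non-blocking test: %done is true once the phase captured in %state completes.
%done = nvvm.mbarrier.test.wait %barrier, %state : !llvm.ptr, i64 -> i1
```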
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_test_wait_shared Method
mbarrier_test_wait_shared
This Op is the same as nvvm.mbarrier.test.wait except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_try_wait_parity Method
mbarrier_try_wait_parity
The nvvm.mbarrier.try_wait.parity operation performs a potentially-blocking test for the completion of a specific phase of an mbarrier object using phase parity. It uses the default .acquire.cta semantics. This acquire pattern establishes memory ordering for operations occurring in program order after this wait instruction by making operations from other threads in the CTA visible to subsequent operations in the current thread. When this wait completes, it synchronizes with the corresponding release pattern from the mbarrier.arrive operation, establishing memory ordering within the CTA.
This operation waits for the completion of the mbarrier phase indicated by the phase parity. While it uses the underlying PTX mbarrier.try_wait.parity instruction, this MLIR operation generates a loop that enforces the test to complete before continuing execution, ensuring the barrier phase is actually completed rather than potentially timing out.
The operation takes the following operands:
- addr: A pointer to the memory location of the mbarrier object. Uses generic addressing, but the address must still be in the shared memory space.
- phase: An integer specifying the phase parity (0 or 1). Even phases have parity 0, odd phases have parity 1.
- ticks: An unsigned integer specifying the suspend time hint in nanoseconds. This may be used instead of the system-dependent time limit.
Memory ordering guarantees: When this wait returns true, the following ordering guarantees hold:
- All memory accesses (except async operations) requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
- All cp.async operations requested prior to cp.async.mbarrier.arrive by participating CTA threads are visible to the executing thread.
- All cp.async.bulk operations using the same mbarrier object requested prior to mbarrier.arrive having release semantics by participating CTA threads are visible to the executing thread.
- Memory accesses requested after this wait are not visible to memory accesses performed prior to mbarrier.arrive by other participating threads.
- No ordering guarantee exists for memory accesses by the same thread between mbarrier.arrive and this wait.
Implementation behavior: This operation generates a PTX loop that repeatedly calls the underlying mbarrier.try_wait.parity instruction until the barrier phase completes. Unlike the raw PTX instruction which may return without completion after a timeout, this MLIR operation guarantees completion by continuing to loop until the specified phase is reached.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mbarrier_try_wait_parity_shared Method
mbarrier_try_wait_parity_shared
This Op is the same as nvvm.mbarrier.try_wait.parity except that the mbarrier object should be accessed using a shared-memory pointer instead of a generic-memory pointer.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.mma_sync Method
mma_sync
The nvvm.mma.sync operation collectively performs the operation D = matmul(A, B) + C using all threads in a warp.
All the threads in the warp must execute the same mma.sync operation.
For each possible multiplicand PTX data type, there are one or more possible instruction shapes given as "mMnNkK". The table below describes the possibilities as well as the types required for the operands. Note that the data type for C (the accumulator) and D (the result) can vary independently when there are multiple possibilities in the "C/D Type" column.
When an optional attribute cannot be immediately inferred from the types of the operands and the result during parsing or validation, an error will be raised.
b1Op is only relevant when the binary (b1) type is given to multiplicandDataType. It specifies how the multiply-and-accumulate is performed and is either xor_popc or and_popc. The default is xor_popc.
intOverflowBehavior is only relevant when the multiplicandType attribute is one of u8, s8, u4, s4; it describes how overflow is handled in the accumulator. When the attribute is satfinite, the accumulator values are clamped in the int32 range on overflow; this is the default behavior. Alternatively, the accumulator behavior wrapped can also be specified, in which case overflow wraps from one end of the range to the other.
layoutA and layoutB are required and should generally be set to #nvvm.mma_layout<row> and #nvvm.mma_layout<col> respectively, but other combinations are possible for certain layouts according to the table below.
| A/B Type | Shape      | ALayout | BLayout | A Type   | B Type   | C/D Type           |
|----------|------------|---------|---------|----------|----------|--------------------|
| f64      | .m8n8k4    | row     | col     | 1x f64   | 1x f64   | 2x f64             |
| f16      | .m8n8k4    | row/col | row/col | 2x f16x2 | 2x f16x2 | 4x f16x2 or 8x f32 |
|          | .m16n8k8   | row     | col     | 2x f16x2 | 1x f16x2 | 2x f16x2 or 4x f32 |
|          | .m16n8k16  | row     | col     | 4x f16x2 | 2x f16x2 | 2x f16x2 or 4x f32 |
| bf16     | .m16n8k8   | row     | col     | 2x i32   | 1x i32   | 4x f32             |
|          | .m16n8k16  | row     | col     | 4x i32   | 2x i32   | 4x f32             |
| tf32     | .m16n8k4   | row     | col     | 2x i32   | 1x i32   | 4x f32             |
|          | .m16n8k8   | row     | col     | 4x i32   | 2x i32   | 2x f16x2 or 4x f32 |
| u8/s8    | .m8n8k16   | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k16  | row     | col     | 2x i32   | 1x i32   | 4x i32             |
|          | .m16n8k32  | row     | col     | 4x i32   | 2x i32   | 4x i32             |
| u4/s4    | .m8n8k32   | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k32  | row     | col     | 2x i32   | 1x i32   | 4x i32             |
|          | .m16n8k64  | row     | col     | 4x i32   | 2x i32   | 4x i32             |
| b1       | .m8n8k128  | row     | col     | 1x i32   | 1x i32   | 2x i32             |
|          | .m16n8k128 | row     | col     | 2x i32   | 1x i32   | 4x i32             |

Example
```mlir
%128 = nvvm.mma.sync A[%120, %121, %122, %123]
                     B[%124, %125]
                     C[%126, %127]
                     {layoutA = #nvvm.mma_layout<row>,
                      layoutB = #nvvm.mma_layout<col>,
                      shape = {k = 16 : i32, m = 16 : i32, n = 8 : i32}}
    : (vector<2xf16>, vector<2xf16>, vector<2xf16>)
    -> !llvm.struct<(vector<2xf16>, vector<2xf16>)>
```

Reactant.MLIR.Dialects.nvvm.nanosleep Method
nanosleep
The op suspends the thread for a sleep duration approximately equal to the delay $duration, specified in nanoseconds.
The sleep duration is approximate but guaranteed to be in the interval [0, 2*duration]. The maximum sleep duration is 1 millisecond. The implementation may reduce the sleep duration for individual threads within a warp such that all sleeping threads in the warp wake up together.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.pmevent Method
pmevent
Triggers one or more of a fixed number of performance monitor events, with the event index or mask specified by an immediate operand.
Without a mask, it triggers a single performance monitor event indexed by the immediate operand a, in the range 0..15.
With a mask, it triggers one or more of the performance monitor events. Each bit in the 16-bit immediate operand controls an event.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.prefetch Function
prefetch
Prefetches the cache line containing the address given by addr. The operand may be a global, local, or generic pointer. When tensormap is specified, the operand may instead be a constant or generic pointer. If the address maps to shared memory, the operation has no effect.
At most one of cacheLevel or tensormap may be present. The cacheLevel attribute selects the target cache level.
When tensormap is used, the line containing addr is brought from the constant or parameter state space for later use by cp.async.bulk.tensor. If in_param_space is specified, the generic pointer is interpreted as referring to the parameter state space.
uniform can be specified after the cacheLevel to indicate that the prefetch is performed to the specified uniform cache level. If uniform is specified, addr must be a generic address pointer and no operation is performed if addr maps to a const, local, or shared memory location.
The evictPriority attribute is optional and specifies the cache eviction priority when cacheLevel is L2.
For more information, see PTX ISA
sourceReactant.MLIR.Dialects.nvvm.redux_sync Method
redux_sync
redux.sync performs the reduction operation specified by kind on the 32-bit source register across all non-exited threads in the membermask.
The abs and nan attributes can be used with an f32 input type: the abs attribute causes the absolute value of the input to be used in the reduction operation, and the nan attribute causes the reduction operation to return NaN if any of the inputs to participating threads are NaN.
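A minimal sketch (the kind keyword placement and types are assumptions; value names are illustrative):

```mlir
// Warp-wide integer add reduction across the lanes named in %mask.
%sum = nvvm.redux.sync add %val, %mask : i32 -> i32
```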
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.shfl_sync Method
shfl_sync
The shfl.sync Op implements data shuffle within threads of a warp. The thread_mask denotes the threads participating in the Op where the bit position corresponds to a particular thread's laneid. The offset specifies a source lane or source lane offset (depending on kind). The val is the input value to be copied from the source. The mask_and_clamp contains two packed values specifying a mask for logically splitting warps into sub-segments and an upper bound for clamping the source lane index.
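A minimal sketch of a butterfly shuffle (operand names and the exact textual form are assumptions):

```mlir
// Exchange %val with the lane whose id is the current lane id XOR %offset.
%r = nvvm.shfl.sync bfly %mask, %val, %offset, %clamp : i32 -> i32
```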
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.st_bulk Method
st_bulk
Initializes a region of shared memory at the address given by addr. The size operand specifies the number of bytes to initialize and must be a multiple of 8. The initVal operand specifies the value to initialize the memory to. The only supported value is 0.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.stmatrix Method
stmatrix
Collectively store one or more matrices across all threads in a warp to the location indicated by the address operand ptr in shared memory.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_alloc Method
tcgen05_alloc
The tcgen05.alloc Op allocates tensor core memory for the amount specified by nCols and writes the destination address to the addr argument. The nCols operand specifies the number of columns to be allocated and it must be a power-of-two. For more information, see PTX ISA
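A minimal sketch (pointer and type details are assumptions; %dst receives the allocated Tensor Memory address):

```mlir
// Allocate %ncols columns of Tensor Memory; the resulting address is
// written to the location %dst points to.
nvvm.tcgen05.alloc %dst, %ncols : !llvm.ptr, i32
```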
Reactant.MLIR.Dialects.nvvm.tcgen05_commit Function
tcgen05_commit
The tcgen05.commit makes the mbarrier object, specified by the operand addr, track the completion of all the prior async-tcgen05 operations initiated by the executing thread. The multicast variants allow signaling on the mbarrier objects of multiple CTAs within the cluster. Operand multicastMask, when present, specifies the destination CTAs in the cluster such that each bit position in the 16-bit multicastMask operand corresponds to the nvvm.read.ptx.sreg.ctaid of the destination CTA. For more information, see PTX ISA
Reactant.MLIR.Dialects.nvvm.tcgen05_cp Method
tcgen05_cp
Instruction tcgen05.cp initiates an asynchronous copy operation from shared memory to the location specified by the address operand taddr in the Tensor Memory. The 64-bit register operand smem_desc specifies the matrix descriptor representing the source matrix in the shared memory that needs to be copied.
Example

```mlir
nvvm.tcgen05.cp %taddr, %smem_desc {
  group = #nvvm.tcgen05_group<cta_2>,
  shape = #nvvm.tcgen05_cp_shape<shape_64x128b>,
  multicast = #nvvm.tcgen05_cp_multicast<warpx2_01_23>,
  srcFormat = #nvvm.tcgen05_cp_src_fmt<b6x16_p32>
}
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_dealloc Method
tcgen05_dealloc
The tcgen05.dealloc Op de-allocates the tensor core memory specified by tmemAddr, which must be from a previous tensor memory allocation. The nCols operand specifies the number of columns to be de-allocated, and it must be a power-of-two. For more information, see PTX ISA
Reactant.MLIR.Dialects.nvvm.tcgen05_fence Method
tcgen05_fence
The tcgen05.fence<before> orders all prior async tcgen05 operations with respect to the subsequent tcgen05 and execution ordering operations. The tcgen05.fence<after> orders all subsequent async tcgen05 operations with respect to the prior tcgen05 and execution ordering operations.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_ld Function
tcgen05_ld
Instruction tcgen05.ld asynchronously loads data from the Tensor Memory at the location specified by the 32-bit address operand tmemAddr into the destination register res, collectively across all threads of the warps.
The shape and the num attribute together determine the total dimension of the data which is loaded from the Tensor Memory. The shape attribute indicates the base dimension of data to be accessed as described in the Data Movement Shape. The num attribute indicates the repeat factor on the base dimension resulting in the total dimension of the data that is accessed.
The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.
The unit attribute pack can be used to pack two 16-bit elements from adjacent columns into a single 32-bit element during the load.
The following table describes the size of the vector for various combinations of num and shape attributes:
|=====================================================================|
| num/shape | 16x32bx2/16x64b/32x32b | 16x128b | 16x256b |
|=====================================================================|
| x1 | 1 | 2 | 4 |
| x2 | 2 | 4 | 8 |
| x4 | 4 | 8 | 16 |
| x8 | 8 | 16 | 32 |
| x16 | 16 | 32 | 64 |
| x32 | 32 | 64 | 128 |
| x64 | 64 | 128 | NA |
| x128 | 128 | NA | NA |
|=====================================================================|

Example

```mlir
nvvm.tcgen05.ld %tmemAddr, %offset pack {
  shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>,
} : <2xi32>
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_mma_smem_desc Method
tcgen05_mma_smem_desc
The nvvm.tcgen05_mma_smem_desc constructs a Shared Memory descriptor for tcgen05.mma. This descriptor is a 64-bit value which describes the properties of multiplicand matrix in shared memory including its location in the shared memory of the current CTA.
+-----------+------+------------------------------------------------------+
| Bit-field | Size | Description |
+-----------+------+------------------------------------------------------+
| 0-13 | 14 | Matrix start address |
| 14-15 | 2 | Reserved |
| 16-29 | 14 | Leading dim relative-offset (or) absolute-address |
| 30-31 | 2 | Reserved |
| 32-45 | 14 | Stride dimension byte offset |
| 46-48 | 3 | Fixed constant value of 0b001 |
| 49-51 | 3 | Matrix base offset |
| 52 | 1 | Leading dimension stride mode: |
| | | 0: byte offset relative |
| | | 1: byte address absolute |
| 53-60     | 8    | Fixed constant value of 0b00000000                   |
| 61-63 | 3 | Swizzling mode: |
| | | 0: No swizzling |
| | | 1: 128-Byte with 32B atomic swizzling |
| | | 2: 128-Byte swizzling |
| | | 4: 64-Byte swizzling |
| | | 6: 32-Byte swizzling |
| | | (Values 3, 5 and 7 are invalid) |
+-----------+------+------------------------------------------------------+

Example

```mlir
%desc = nvvm.tcgen05.mma_smem_desc (%startAddr, %leadingDimOffset, %strideDimOffset,
                                    %baseOffset, %leadingDimMode, %swizzleMode)
    : (i32, i32, i32, i8, i1, i8) -> i64
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_relinquish_alloc_permit Method
tcgen05_relinquish_alloc_permit
The tcgen05.relinquish_alloc_permit Op specifies that the CTA of the executing thread is relinquishing the right to allocate Tensor Memory. So, it is illegal for a CTA to perform tcgen05.alloc after any of its constituent threads execute tcgen05.relinquish_alloc_permit. For more information, see PTX ISA
Reactant.MLIR.Dialects.nvvm.tcgen05_shift Method
tcgen05_shift
The tcgen05.shift is an asynchronous instruction which initiates the shifting of 32-byte elements downwards across all the rows, except the last, by one row. The operand taddr specifies the base address of the matrix in Tensor Memory whose rows must be down shifted.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_st Function
tcgen05_st
Instruction tcgen05.st asynchronously stores data from the source register r into the Tensor Memory at the location specified by the 32-bit address operand tmemAddr, collectively across all threads of the warps.
The shape and the num attribute together determine the total dimension of the data which is stored to the Tensor Memory. The shape indicates the base dimension of data to be accessed. The num attribute indicates the repeat factor on the base dimension resulting in the total dimension of the data that is accessed.
The shape 16x32bx2 performs two accesses into Tensor Memory of the shape 16x32b. The base address of the first access is specified by tmemAddr and the base address of the second access is specified by tmemAddr + offset, where offset is an immediate argument.
The unit attribute unpack can be used to unpack a 32-bit element in the register into two 16-bit elements and store them in adjacent columns.
The following table describes the size of the vector for various combinations of num and shape attributes:
|=====================================================================|
| num/shape | 16x32bx2/16x64b/32x32b | 16x128b | 16x256b |
|=====================================================================|
| x1 | 1 | 2 | 4 |
| x2 | 2 | 4 | 8 |
| x4 | 4 | 8 | 16 |
| x8 | 8 | 16 | 32 |
| x16 | 16 | 32 | 64 |
| x32 | 32 | 64 | 128 |
| x64 | 64 | 128 | NA |
| x128 | 128 | NA | NA |
|=====================================================================|

Example

```mlir
nvvm.tcgen05.st %tmemAddr, %val, %offset unpack {
  shape = #nvvm.tcgen05_ldst_shape<shape_16x32bx2>,
} : <2xi32>
```

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.tcgen05_wait Method
tcgen05_wait
The tcgen05.wait<load> causes the executing thread to block until all prior tcgen05.ld operations issued by the executing thread have completed. Similarly, the tcgen05.wait<store> causes the executing thread to block until all prior tcgen05.st operations issued by the executing thread have completed. For more information, see PTX ISA
Reactant.MLIR.Dialects.nvvm.vote_sync Method
vote_sync
The vote.sync op causes the executing thread to wait until all non-exited threads corresponding to membermask have executed vote.sync with the same qualifiers and the same membermask value before resuming execution.
The vote operation kinds are:
- any: True if the source predicate is True for some thread in membermask.
- all: True if the source predicate is True for all non-exited threads in membermask.
- uni: True if the source predicate has the same value in all non-exited threads in membermask.
- ballot: In the ballot form, the destination result is a 32-bit integer. In this form, the predicate from each thread in membermask is copied into the corresponding bit position of the result, where the bit position corresponds to the thread's lane id.
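A minimal sketch of two of the kinds listed above (value names are illustrative assumptions):

```mlir
// True if %pred is true in any non-exited thread of %mask.
%any = nvvm.vote.sync any %mask, %pred -> i1
// One bit per lane of %mask holding that lane's predicate.
%ballot = nvvm.vote.sync ballot %mask, %pred -> i32
```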
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.wgmma_commit_group_sync_aligned Method
wgmma_commit_group_sync_aligned
Commits all prior uncommitted warpgroup level matrix multiplication operations.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.wgmma_fence_aligned Method
wgmma_fence_aligned
Enforce an ordering of register accesses between warpgroup level matrix multiplication and other operations.
For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.wgmma_mma_async Method
wgmma_mma_async
The warpgroup (128 threads) level matrix multiply and accumulate operation has either of the following forms, where matrix D is called the accumulator:
- D = A * B + D
- D = A * B, where the input from the accumulator D is disabled.
Supported shapes:
|--------------|--------------|------------|--------------|---------------|
|              |              |            |              |f16+=e4m3*e4m3 |
|              |              |            |              |f16+=e5m2*e5m2 |
|f32+=tf32*tf32|f16+=f16 *f16 | s32+=s8*s8 |s32 += b1 * b1|f16+=e5m2*e4m3 |
|              |f32+=f16 *f16 | s32+=u8*u8 |              |f16+=e4m3*e5m2 |
|              |f32+=bf16*bf16| s32+=s8*u8 |              |f32+=e4m3*e4m3 |
|              |              | s32+=u8*s8 |              |f32+=e5m2*e5m2 |
|              |              |            |              |f32+=e4m3*e5m2 |
|              |              |            |              |f32+=e5m2*e4m3 |
|--------------|--------------|------------|--------------|---------------|
| .m64n8k8 | .m64n8k16 | .m64n8k32 | .m64n8k256 | .m64n8k32 |
| .m64n16k8 | .m64n16k16 | .m64n16k32 | .m64n16k256 | .m64n16k32 |
| .m64n24k8 | .m64n24k16 | .m64n24k32 | .m64n24k256 | .m64n24k32 |
| .m64n32k8 | .m64n32k16 | .m64n32k32 | .m64n32k256 | .m64n32k32 |
| .m64n40k8 | .m64n40k16 | .m64n48k32 | .m64n48k256 | .m64n40k32 |
| .m64n48k8 | .m64n48k16 | .m64n64k32 | .m64n64k256 | .m64n48k32 |
| .m64n56k8 | .m64n56k16 | .m64n80k32 | .m64n80k256 | .m64n56k32 |
| .m64n64k8 | .m64n64k16 | .m64n96k32 | .m64n96k256 | .m64n64k32 |
| .m64n72k8 | .m64n72k16 | .m64n112k32| .m64n112k256 | .m64n72k32 |
| .m64n80k8 | .m64n80k16 | .m64n128k32| .m64n128k256 | .m64n80k32 |
| .m64n88k8 | .m64n88k16 | .m64n144k32| .m64n144k256 | .m64n88k32 |
| .m64n96k8 | .m64n96k16 | .m64n160k32| .m64n160k256 | .m64n96k32 |
| .m64n104k8 | .m64n104k16 | .m64n176k32| .m64n176k256 | .m64n104k32 |
| .m64n112k8 | .m64n112k16 | .m64n192k32| .m64n192k256 | .m64n112k32 |
| .m64n120k8 | .m64n120k16 | .m64n208k32| .m64n208k256 | .m64n120k32 |
| .m64n128k8 | .m64n128k16 | .m64n224k32| .m64n224k256 | .m64n128k32 |
| .m64n136k8 | .m64n136k16 | .m64n240k32| .m64n240k256 | .m64n136k32 |
| .m64n144k8 | .m64n144k16 | .m64n256k32| .m64n256k256 | .m64n144k32 |
| .m64n152k8 | .m64n152k16 | | | .m64n152k32 |
| .m64n160k8 | .m64n160k16 | | | .m64n160k32 |
| .m64n168k8 | .m64n168k16 | | | .m64n168k32 |
| .m64n176k8 | .m64n176k16 | | | .m64n176k32 |
| .m64n184k8 | .m64n184k16 | | | .m64n184k32 |
| .m64n192k8 | .m64n192k16 | | | .m64n192k32 |
| .m64n200k8 | .m64n200k16 | | | .m64n200k32 |
| .m64n208k8 | .m64n208k16 | | | .m64n208k32 |
| .m64n216k8 | .m64n216k16 | | | .m64n216k32 |
| .m64n224k8 | .m64n224k16 | | | .m64n224k32 |
| .m64n232k8 | .m64n232k16 | | | .m64n232k32 |
| .m64n240k8 | .m64n240k16 | | | .m64n240k32 |
| .m64n248k8 | .m64n248k16 | | | .m64n248k32 |
| .m64n256k8 | .m64n256k16 | | | .m64n256k32 |
|--------------|--------------|------------|--------------|---------------|

For more information, see PTX ISA
source
Reactant.MLIR.Dialects.nvvm.wgmma_wait_group_sync_aligned Method
wgmma_wait_group_sync_aligned
Signal the completion of a preceding warpgroup operation.
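A minimal sketch of the usual fence/commit/wait sequence around wgmma ops (the integer group argument is an assumption; 0 waits for all committed groups):

```mlir
nvvm.wgmma.fence.aligned
// ... one or more nvvm.wgmma.mma_async ops ...
nvvm.wgmma.commit.group.sync.aligned
nvvm.wgmma.wait.group.sync.aligned 0
```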
For more information, see PTX ISA
source