API reference
Types and constants
EnzymeCore.ABI, EnzymeCore.Active, EnzymeCore.Annotation, EnzymeCore.BatchDuplicated, EnzymeCore.BatchDuplicatedNoNeed, EnzymeCore.BatchMixedDuplicated, EnzymeCore.Const, EnzymeCore.Duplicated, EnzymeCore.DuplicatedNoNeed, EnzymeCore.FFIABI, EnzymeCore.ForwardMode, EnzymeCore.InlineABI, EnzymeCore.MixedDuplicated, EnzymeCore.Mode, EnzymeCore.NonGenABI, EnzymeCore.ReverseMode, EnzymeCore.ReverseModeSplit, EnzymeCore.Forward, EnzymeCore.ForwardWithPrimal, EnzymeCore.Reverse, EnzymeCore.ReverseHolomorphic, EnzymeCore.ReverseHolomorphicWithPrimal, EnzymeCore.ReverseSplitNoPrimal, EnzymeCore.ReverseSplitWithPrimal, EnzymeCore.ReverseWithPrimal, EnzymeCore.EnzymeRules.AugmentedReturn, EnzymeCore.EnzymeRules.FwdConfig, EnzymeCore.EnzymeRules.RevConfig, EnzymeTestUtils.ExprAndMsg, Enzyme.Compiler.EnzymeError, Enzyme.Compiler.CheckNan
Functions and macros
Enzyme.@import_frule, Enzyme.@import_rrule, Enzyme.create_fresh_codeinfo, Enzyme.gradient, Enzyme.gradient, Enzyme.gradient!, Enzyme.guess_activity, Enzyme.hvp, Enzyme.hvp!, Enzyme.hvp_and_gradient!, Enzyme.jacobian, Enzyme.jacobian, Enzyme.typetree, Enzyme.unsafe_to_pointer, EnzymeCore.Combined, EnzymeCore.NoPrimal, EnzymeCore.ReverseSplitModified, EnzymeCore.ReverseSplitWidth, EnzymeCore.Split, EnzymeCore.WithPrimal, EnzymeCore.autodiff, EnzymeCore.autodiff, EnzymeCore.autodiff, EnzymeCore.autodiff, EnzymeCore.autodiff, EnzymeCore.autodiff_deferred, EnzymeCore.autodiff_deferred, EnzymeCore.autodiff_deferred_thunk, EnzymeCore.autodiff_thunk, EnzymeCore.autodiff_thunk, EnzymeCore.clear_err_if_func_written, EnzymeCore.clear_runtime_activity, EnzymeCore.clear_strong_zero, EnzymeCore.compiler_job_from_backend, EnzymeCore.ignore_derivatives, EnzymeCore.make_zero, EnzymeCore.make_zero, EnzymeCore.make_zero!, EnzymeCore.needs_primal, EnzymeCore.needs_primal, EnzymeCore.remake_zero!, EnzymeCore.runtime_activity, EnzymeCore.set_abi, EnzymeCore.set_err_if_func_written, EnzymeCore.set_runtime_activity, EnzymeCore.set_strong_zero, EnzymeCore.strong_zero, EnzymeCore.within_autodiff, EnzymeCore.EnzymeRules.@easy_rule, EnzymeCore.EnzymeRules._constrain_and_name, EnzymeCore.EnzymeRules._just_name, EnzymeCore.EnzymeRules._normalize_scalarrules_macro_input, EnzymeCore.EnzymeRules._unconstrain, EnzymeCore.EnzymeRules.augmented_primal, EnzymeCore.EnzymeRules.forward, EnzymeCore.EnzymeRules.inactive, EnzymeCore.EnzymeRules.inactive_noinl, EnzymeCore.EnzymeRules.inactive_type, EnzymeCore.EnzymeRules.multiply_fwd_into, EnzymeCore.EnzymeRules.multiply_rev_into, EnzymeCore.EnzymeRules.needs_shadow, EnzymeCore.EnzymeRules.noalias, EnzymeCore.EnzymeRules.overwritten, EnzymeCore.EnzymeRules.primal_type, EnzymeCore.EnzymeRules.reverse, EnzymeCore.EnzymeRules.shadow_type, EnzymeCore.EnzymeRules.uses_symbol, EnzymeCore.EnzymeRules.width, EnzymeTestUtils.@test_msg, EnzymeTestUtils.are_activities_compatible, EnzymeTestUtils.test_forward, EnzymeTestUtils.test_reverse, Enzyme.API.fast_math!, Enzyme.API.inlineall!, Enzyme.API.instname!, Enzyme.API.looseTypeAnalysis!, Enzyme.API.maxtypedepth!, Enzyme.API.maxtypeoffset!, Enzyme.API.memmove_warning!, Enzyme.API.printactivity!, Enzyme.API.printall!, Enzyme.API.printdiffuse!, Enzyme.API.printperf!, Enzyme.API.printtype!, Enzyme.API.printunnecessary!, Enzyme.API.strictAliasing!, Enzyme.API.typeWarning!
Documentation
Enzyme.@import_frule — Macro
import_frule(::fn, tys...)
Automatically import a ChainRulesCore.frule as a custom forward-mode EnzymeRule. When called in batch mode, this will end up calling the primal multiple times, which may result in incorrect behavior if the function mutates its arguments, and will always produce slow code. Importing the rule from ChainRules is also likely to be slower than writing your own rule, and may even be slower than having no rule at all.
Use with caution.
Enzyme.@import_frule(typeof(Base.sort), Any);
x=[1.0, 2.0, 0.0]; dx=[0.1, 0.2, 0.3]; ddx = [0.01, 0.02, 0.03];
Enzyme.autodiff(Forward, sort, Duplicated, BatchDuplicated(x, (dx,ddx)))
Enzyme.autodiff(Forward, sort, DuplicatedNoNeed, BatchDuplicated(x, (dx,ddx)))
Enzyme.autodiff(Forward, sort, DuplicatedNoNeed, BatchDuplicated(x, (dx,)))
Enzyme.autodiff(Forward, sort, Duplicated, BatchDuplicated(x, (dx,)))
# output
(var"1" = [0.0, 1.0, 2.0], var"2" = (var"1" = [0.3, 0.1, 0.2], var"2" = [0.03, 0.01, 0.02]))
(var"1" = (var"1" = [0.3, 0.1, 0.2], var"2" = [0.03, 0.01, 0.02]),)
(var"1" = [0.3, 0.1, 0.2],)
(var"1" = [0.0, 1.0, 2.0], var"2" = [0.3, 0.1, 0.2])
Enzyme.@import_rrule — Macro
import_rrule(::fn, tys...)
Automatically import a ChainRules.rrule as a custom reverse-mode EnzymeRule. When called in batch mode, this will end up calling the primal multiple times, which results in slower code. This macro assumes that the underlying function to be imported is read-only and returns a Duplicated or Const object. It also assumes that the inputs permit a .+= operation and that the output has a valid Enzyme.make_zero function defined. Finally, it assumes that overwritten(x) accurately describes whether there is any non-preserved data from the forward to the reverse pass, not just whether the outermost data structure is overwritten as provided by the specification.
Additionally, this macro almost always caches all of the inputs, even when they may not be needed for the derivative computation.
As a result, this auto-importer is also likely to be slower than writing your own rule, and may even be slower than having no rule at all.
Use with caution.
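As a sketch, an imported rrule can then be used from reverse mode like any other differentiable call. This assumes ChainRules provides an rrule for Base.sort; the choice of sum ∘ sort is purely illustrative:

```julia
using Enzyme
Enzyme.@import_rrule(typeof(Base.sort), Any)

# Differentiate sum ∘ sort; each input contributes exactly once to the sum,
# so the gradient is a vector of ones (permuted back through the sort).
x = [1.0, 2.0, 0.0]
Enzyme.gradient(Reverse, x -> sum(sort(x)), x)
```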
Enzyme.@import_rrule(typeof(Base.sort), Any);
Enzyme.create_fresh_codeinfo — Method
create_fresh_codeinfo(fn, source, world)
Create a fresh Core.CodeInfo for a generated function fn with source location source and world age world. Callers are responsible for setting ci.edges.
Enzyme.gradient! — Method
gradient!(::ReverseMode, dx, f, x)
Compute the gradient of an array-input function f using reverse mode, storing the derivative result in an existing array dx. Both x and dx must be Arrays of the same type.
Example:
f(x) = x[1]*x[2]
dx = [0.0, 0.0]
gradient!(Reverse, dx, f, [2.0, 3.0])
# output
([3.0, 2.0],)
dx = [0.0, 0.0]
gradient!(ReverseWithPrimal, dx, f, [2.0, 3.0])
# output
(derivs = ([3.0, 2.0],), val = 6.0)
Enzyme.gradient — Method
gradient(::ReverseMode, f, args...)
Compute the gradient of a real-valued function f using reverse mode. For each differentiable argument, this function will allocate and return a new derivative object, returning a tuple of derivatives for each argument. If an argument is not differentiable, the corresponding element of the returned tuple will be nothing.
In reverse mode (used here), the derivatives will be the same type as the original arguments.
This is a structure gradient. For a struct x it returns another instance of the same type, whose fields contain the components of the gradient. In the result, grad.a contains ∂f/∂x.a for any differentiable x.a, while grad.c == x.c for other types.
Examples:
f(x) = x[1]*x[2]
grad = gradient(Reverse, f, [2.0, 3.0])
# output
([3.0, 2.0],)
grad = gradient(Reverse, only ∘ f, (a = 2.0, b = [3.0], c = "str"))
# output
((a = 3.0, b = [2.0], c = "str"),)
mul(x, y) = x[1]*y[1]
grad = gradient(Reverse, mul, [2.0], [3.0])
# output
([3.0], [2.0])
grad = gradient(Reverse, mul, [2.0], Const([3.0]))
# output
([3.0], nothing)
If passing a mode that returns the primal (e.g. ReverseWithPrimal), the return type will instead be a tuple where the first element contains the derivatives, and the second element contains the result of the original computation.
grad = gradient(ReverseWithPrimal, f, [2.0, 3.0])
# output
(derivs = ([3.0, 2.0],), val = 6.0)
grad = gradient(ReverseWithPrimal, mul, [2.0], [3.0])
# output
(derivs = ([3.0], [2.0]), val = 6.0)
grad = gradient(ReverseWithPrimal, mul, [2.0], Const([3.0]))
# output
(derivs = ([3.0], nothing), val = 6.0)
Enzyme.gradient — Method
gradient(::ForwardMode, f, x; shadows=onehot(x), chunk=nothing)
Compute the gradient of an array-input function f using forward mode. The optional keyword argument shadows is a vector of one-hot vectors of the same type as x, which are used to forward-propagate into the return value. For performance reasons, this should be computed once, outside the call to gradient, rather than within this call.
Example:
f(x) = x[1]*x[2]
gradient(Forward, f, [2.0, 3.0])
# output
([3.0, 2.0],)
gradient(ForwardWithPrimal, f, [2.0, 3.0])
# output
(derivs = ([3.0, 2.0],), val = 6.0)
gradient(Forward, f, [2.0, 3.0]; chunk=Val(1))
# output
([3.0, 2.0],)
gradient(ForwardWithPrimal, f, [2.0, 3.0]; chunk=Val(1))
# output
(derivs = ([3.0, 2.0],), val = 6.0)
For functions which return an AbstractArray or scalar, this function will return an AbstractArray whose shape is (size(output)..., size(input)...). No guarantees are presently made about the type of the AbstractArray returned by this function (which may or may not be the same as the input AbstractArray, if one was provided).
For functions that return other types, this function will return an AbstractArray of shape size(input) containing values of the output type.
f(x) = [ x[1] * x[2], x[2] + x[3] ]
grad = gradient(Forward, f, [2.0, 3.0, 4.0])
# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)
This function supports multiple arguments and computes the gradient with respect to each.
mul(x, y) = x[1]*y[2] + x[2]*y[1]
gradient(Forward, mul, [2.0, 3.0], [2.7, 3.1])
# output
([3.1, 2.7], [3.0, 2.0])
This includes the ability to mark some arguments as Const if their derivatives are not needed, returning nothing in the corresponding slot of the derivative tuple.
gradient(Forward, mul, [2.0, 3.0], Const([2.7, 3.1]))
# output
([3.1, 2.7], nothing)
Enzyme.hvp! — Method
hvp!(res::X, f::F, x::X, v::X) where {F, X}
Compute an in-place Hessian-vector product of an array-input, scalar-output function f, as evaluated at x, times the vector v. The result will be stored into res. The function still allocates and zeroes a buffer to store the intermediate gradient, which is not returned to the user.
In other words, compute res .= hessian(f)(x) * v
See hvp_and_gradient! for a function to compute both the hvp and the gradient in a single call.
Example:
f(x) = sin(x[1] * x[2])
res = Vector{Float64}(undef, 2)
hvp!(res, f, [2.0, 3.0], [5.0, 2.7])
res
# output
2-element Vector{Float64}:
19.6926882637302
16.201003759768003
Enzyme.hvp — Method
hvp(f::F, x::X, v::X) where {F, X}
Compute the Hessian-vector product of an array-input, scalar-output function f, as evaluated at x, times the vector v.
In other words, compute hessian(f)(x) * v
See hvp! for a version which stores the result in an existing buffer and also hvp_and_gradient! for a function to compute both the hvp and the gradient in a single call.
Example:
f(x) = sin(x[1] * x[2])
hvp(f, [2.0, 3.0], [5.0, 2.7])
# output
2-element Vector{Float64}:
19.6926882637302
16.201003759768003
Enzyme.hvp_and_gradient! — Method
hvp_and_gradient!(res::X, grad::X, f::F, x::X, v::X) where {F, X}
Compute an in-place Hessian-vector product of an array-input, scalar-output function f, as evaluated at x, times the vector v, as well as the gradient, storing the gradient into grad. Both the Hessian-vector product and the gradient can be computed together more efficiently than computing them separately.
The result will be stored into res. The gradient will be stored into grad.
In other words, compute res .= hessian(f)(x) * v and grad .= gradient(Reverse, f)(x)
Example:
f(x) = sin(x[1] * x[2])
res = Vector{Float64}(undef, 2)
grad = Vector{Float64}(undef, 2)
hvp_and_gradient!(res, grad, f, [2.0, 3.0], [5.0, 2.7])
res
grad
# output
2-element Vector{Float64}:
2.880510859951098
1.920340573300732
Enzyme.jacobian — Method
jacobian(::ForwardMode, args...; kwargs...)
Equivalent to gradient(::ForwardMode, args...; kwargs...).
Enzyme.jacobian — Method
jacobian(::ReverseMode, f, x; n_outs=nothing, chunk=nothing)
jacobian(::ReverseMode, f, x)
Compute the Jacobian of an array-output function f using (potentially vector) reverse mode. The chunk argument optionally denotes the chunk size to use, and n_outs optionally denotes the shape of the array returned by f (e.g. size(f(x))).
Example:
f(x) = [ x[1] * x[2], x[2] + x[3] ]
jacobian(Reverse, f, [2.0, 3.0, 4.0])
# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)
f(x) = [ x[1] * x[2], x[2] + x[3] ]
grad = jacobian(ReverseWithPrimal, f, [2.0, 3.0, 4.0])
# output
(derivs = ([3.0 2.0 0.0; 0.0 1.0 1.0],), val = [6.0, 7.0])
f(x) = [ x[1] * x[2], x[2] + x[3] ]
grad = jacobian(Reverse, f, [2.0, 3.0, 4.0], n_outs=Val((2,)))
# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)
f(x) = [ x[1] * x[2], x[2] + x[3] ]
grad = jacobian(ReverseWithPrimal, f, [2.0, 3.0, 4.0], n_outs=Val((2,)))
# output
(derivs = ([3.0 2.0 0.0; 0.0 1.0 1.0],), val = [6.0, 7.0])
This function will return an AbstractArray whose shape is (size(output)..., size(input)...). No guarantees are presently made about the type of the AbstractArray returned by this function (which may or may not be the same as the input AbstractArray, if one was provided).
In the future, when this function is extended to handle non-array return types, it will return an AbstractArray of shape size(output) containing values of the input type.
Enzyme.typetree — Function
typetree(T, ctx, dl, seen=TypeTreeTable())
Construct an Enzyme typetree from a Julia type.
Enzyme.unsafe_to_pointer — Function
unsafe_to_pointer
EnzymeCore.autodiff — Method
autodiff(::ForwardMode, f, Activity, args::Annotation...)
Auto-differentiate function f at arguments args using forward mode.
args may be numbers, arrays, structs of numbers, structs of arrays, and so on. Enzyme will only differentiate with respect to arguments that are wrapped in a Duplicated or similar annotation. Unlike reverse-mode autodiff, Active arguments are not allowed here, since all derivative results of immutable objects will be returned; use Duplicated or variants like DuplicatedNoNeed instead.
Activity is the Activity of the return value. It may be:
Const, if the return is not to be differentiated with respect to;
Duplicated, if the return is being differentiated with respect to;
BatchDuplicated, like Duplicated, but computing multiple derivatives at once. All batch sizes must be the same for all arguments.
Example returning both original return and derivative:
f(x) = x*x
res, ∂f_∂x = autodiff(ForwardWithPrimal, f, Duplicated, Duplicated(3.14, 1.0))
# output
(6.28, 9.8596)
Example returning just the derivative:
f(x) = x*x
∂f_∂x = autodiff(Forward, f, Duplicated, Duplicated(3.14, 1.0))
# output
(6.28,)
EnzymeCore.autodiff — Method
autodiff(::ReverseMode, f, Activity, args::Annotation...)
Auto-differentiate function f at arguments args using reverse mode.
Limitations:
f may only return a Real (of a built-in/primitive type) or nothing, not an array, struct, BigFloat, etc. To handle vector-valued return types, use a mutating f! that returns nothing and stores its result in one of the arguments, which must be wrapped in a Duplicated.
args may be numbers, arrays, structs of numbers, structs of arrays, and so on. Enzyme will only differentiate with respect to arguments that are wrapped in an Active (for arguments whose derivative result must be returned rather than mutated in place, such as primitive types and structs thereof) or Duplicated (for mutable arguments like arrays, Refs, and structs thereof).
Activity is the Activity of the return value; it may be Const or Active.
Example:
a = 4.2
b = [2.2, 3.3]; ∂f_∂b = zero(b)
c = 55; d = 9
f(a, b, c, d) = a * √(b[1]^2 + b[2]^2) + c^2 * d^2
∂f_∂a, _, _, ∂f_∂d = autodiff(Reverse, f, Active, Active(a), Duplicated(b, ∂f_∂b), Const(c), Active(d))[1]
# output
(3.966106403010388, nothing, nothing, 54450.0)
Here, autodiff returns a tuple $(\partial f/\partial a, \partial f/\partial d)$, while $\partial f/\partial b$ will be added to ∂f_∂b (but not returned). c will be treated as Const(c).
One can also request the original returned value of the computation.
Example:
Enzyme.autodiff(ReverseWithPrimal, x->x*x, Active(3.0))
# output
((6.0,), 9.0)
Enzyme gradients with respect to integer values are zero. Active will automatically convert plain integers to floating-point values, but cannot do so for integer values in tuples and structs.
EnzymeCore.autodiff — Method
autodiff(::Function, ::Mode, args...)
Specialization of autodiff to handle do-argument closures.
autodiff(Reverse, Active(3.1)) do x
return x*x
end
# output
((6.2,),)
EnzymeCore.autodiff — Method
autodiff(mode::Mode, f, args...)
Like autodiff, but will try to guess the activity of the return value.
EnzymeCore.autodiff — Method
autodiff(mode::Mode, f, ::Type{A}, args::Annotation...)
Like autodiff, but will try to extend f to an annotation, if needed.
EnzymeCore.autodiff_deferred — Method
autodiff_deferred(::ReverseMode, f, Activity, args::Annotation...)
Same as autodiff but uses deferred compilation to support usage in GPU code, as well as higher-order differentiation.
EnzymeCore.autodiff_deferred — Method
autodiff_deferred(::ForwardMode, f, Activity, args::Annotation...)
Same as autodiff(::ForwardMode, f, Activity, args...) but uses deferred compilation to support usage in GPU code, as well as higher-order differentiation.
EnzymeCore.autodiff_deferred_thunk — Method
autodiff_deferred_thunk(::ReverseModeSplit, TapeType::Type, ftype::Type{<:Annotation}, Activity::Type{<:Annotation}, argtypes::Type{<:Annotation}...)
Provide the split forward and reverse pass functions for annotated function type ftype when called with args of type argtypes when using reverse mode.
Activity is the Activity of the return value; it may be Const, Active, or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return a tape, the primal (or nothing if not requested), and the shadow (or nothing if not a Duplicated variant), and tapes the corresponding type arguments provided.
The reverse function will return the derivative of the Active arguments, updating the Duplicated arguments in place. The same arguments as the forward pass should be provided, followed by the adjoint of the return (if the return is active), and finally the tape from the forward pass.
Example:
A = [2.2]; ∂A = zero(A)
v = 3.3
function f(A, v)
res = A[1] * v
A[1] = 0
res
end
TapeType = tape_type(ReverseSplitWithPrimal, Const{typeof(f)}, Active, Duplicated{typeof(A)}, Active{typeof(v)})
forward, reverse = autodiff_deferred_thunk(ReverseSplitWithPrimal, TapeType, Const{typeof(f)}, Active{Float64}, Duplicated{typeof(A)}, Active{typeof(v)})
tape, result, shadow_result = forward(Const(f), Duplicated(A, ∂A), Active(v))
_, ∂v = reverse(Const(f), Duplicated(A, ∂A), Active(v), 1.0, tape)[1]
result, ∂v, ∂A
# output
(7.26, 2.2, [3.3])
EnzymeCore.autodiff_thunk — Method
autodiff_thunk(::ForwardMode, ftype, Activity, argtypes::Type{<:Annotation}...)
Provide the thunk forward-mode function for annotated function type ftype when called with args of type argtypes.
Activity is the Activity of the return value; it may be Const or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return the shadow (or nothing if not a Duplicated variant) and the primal (if requested).
Example returning both the return derivative and original return:
a = 4.2
b = [2.2, 3.3]; ∂f_∂b = zero(b)
c = 55; d = 9
f(x) = x*x
forward = autodiff_thunk(ForwardWithPrimal, Const{typeof(f)}, Duplicated, Duplicated{Float64})
∂f_∂x, res = forward(Const(f), Duplicated(3.14, 1.0))
# output
(6.28, 9.8596)
Example returning just the derivative:
a = 4.2
b = [2.2, 3.3]; ∂f_∂b = zero(b)
c = 55; d = 9
f(x) = x*x
forward = autodiff_thunk(Forward, Const{typeof(f)}, Duplicated, Duplicated{Float64})
∂f_∂x, = forward(Const(f), Duplicated(3.14, 1.0))
# output
(6.28,)
EnzymeCore.autodiff_thunk — Method
autodiff_thunk(::ReverseModeSplit, ftype, Activity, argtypes::Type{<:Annotation}...)
Provide the split forward and reverse pass functions for annotated function type ftype when called with args of type argtypes when using reverse mode.
Activity is the Activity of the return value; it may be Const, Active, or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return a tape, the primal (or nothing if not requested), and the shadow (or nothing if not a Duplicated variant), and tapes the corresponding type arguments provided.
The reverse function will return the derivative of the Active arguments, updating the Duplicated arguments in place. The same arguments as the forward pass should be provided, followed by the adjoint of the return (if the return is active), and finally the tape from the forward pass.
Example:
A = [2.2]; ∂A = zero(A)
v = 3.3
function f(A, v)
res = A[1] * v
A[1] = 0
res
end
forward, reverse = autodiff_thunk(ReverseSplitWithPrimal, Const{typeof(f)}, Active, Duplicated{typeof(A)}, Active{typeof(v)})
tape, result, shadow_result = forward(Const(f), Duplicated(A, ∂A), Active(v))
_, ∂v = reverse(Const(f), Duplicated(A, ∂A), Active(v), 1.0, tape)[1]
result, ∂v, ∂A
# output
(7.26, 2.2, [3.3])
EnzymeCore.ABI — Type
abstract type ABI
Abstract type for what ABI will be used.
Subtypes
EnzymeCore.Active — Type
Active(x)
Mark a function argument x of autodiff as active; Enzyme will auto-differentiate with respect to Active arguments.
Enzyme gradients with respect to integer values are zero. Active will automatically convert plain integers to floating point values, but cannot do so for integer values in tuples and structs.
EnzymeCore.Annotation — Type
abstract type Annotation{T}
Abstract type for autodiff function argument wrappers like Const, Active, and Duplicated.
EnzymeCore.BatchDuplicated — Type
BatchDuplicated(x, ∂f_∂xs)
Like Duplicated, except it contains several shadows to compute derivatives for all at once. The argument ∂f_∂xs should be a tuple of several values of the same type as x.
EnzymeCore.BatchDuplicatedNoNeed — Type
BatchDuplicatedNoNeed(x, ∂f_∂xs)
Like DuplicatedNoNeed, except it contains several shadows to compute derivatives for all at once. The argument ∂f_∂xs should be a tuple of several values of the same type as x.
EnzymeCore.BatchMixedDuplicated — Type
BatchMixedDuplicated(x, ∂f_∂xs)
Like MixedDuplicated, except it contains several shadows to compute derivatives for all at once. Only used within custom rules.
EnzymeCore.Const — Type
Const(x)
Mark a function argument x of autodiff as constant; Enzyme will not auto-differentiate with respect to Const arguments.
EnzymeCore.Duplicated — Type
Duplicated(x, ∂f_∂x)
Mark a function argument x of autodiff as duplicated; Enzyme will auto-differentiate with respect to such arguments, with ∂f_∂x acting as an accumulator for gradients (so $\partial f / \partial x$ will be added to ∂f_∂x).
EnzymeCore.DuplicatedNoNeed — Type
DuplicatedNoNeed(x, ∂f_∂x)
Like Duplicated, except it also specifies that Enzyme may avoid computing the original result and only compute the derivative values. This creates opportunities for improved performance.
function square_byref(out, v)
out[] = v * v
nothing
end
out = Ref(0.0)
dout = Ref(1.0)
Enzyme.autodiff(Reverse, square_byref, DuplicatedNoNeed(out, dout), Active(1.0))
dout[]
# output
0.0
For example, marking the out variable as DuplicatedNoNeed instead of Duplicated allows Enzyme to avoid computing v * v (while still computing its derivative).
This should only be used if x is a write-only variable. Otherwise, if the differentiated function stores values in x and reads them back in subsequent computations, using DuplicatedNoNeed may result in incorrect derivatives. In particular, DuplicatedNoNeed should not be used for preallocated workspace, even if the user might not care about its final value, since marking a variable as NoNeed means that reads from the variable are now undefined.
EnzymeCore.FFIABI — Type
struct FFIABI <: ABI
Foreign function call ABI. JIT the differentiated function, then inttoptr call the address.
EnzymeCore.ForwardMode — Type
struct ForwardMode{
ReturnPrimal,
ABI,
ErrIfFuncWritten,
RuntimeActivity,
StrongZero
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity,StrongZero}
Subtype of Mode for forward-mode differentiation.
Type parameters
ReturnPrimal: whether to return the primal return value from the augmented-forward pass.
Other parameters: see Mode.
EnzymeCore.InlineABI — Type
struct InlineABI <: ABI
Inlining function call ABI.
EnzymeCore.MixedDuplicated — Type
MixedDuplicated(x, ∂f_∂x)
Like Duplicated, except x may contain both active [immutable] and duplicated [mutable] data which is differentiable. Only used within custom rules.
EnzymeCore.Mode — Type
abstract type Mode{ABI,ErrIfFuncWritten,RuntimeActivity,StrongZero}
Abstract type for which differentiation mode will be used.
Subtypes
Type parameters
ABI: what runtime ABI to use.
ErrIfFuncWritten: whether to error when the function being differentiated is a closure and is written to.
RuntimeActivity: whether to enable runtime activity (default off). Runtime activity is required if the differentiability of all mutable variables cannot be determined statically. For a deeper explanation, see the FAQ.
StrongZero: whether to enforce that propagating a zero derivative input always yields zero derivative outputs. This is required to avoid NaNs if one of the arguments may be infinite or NaN. For a deeper explanation, see the FAQ.
EnzymeCore.NonGenABI — Type
struct NonGenABI <: ABI
Non-generated function ABI.
EnzymeCore.ReverseMode — Type
struct ReverseMode{
ReturnPrimal,
RuntimeActivity,
StrongZero,
ABI,
Holomorphic,
ErrIfFuncWritten
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity,StrongZero}
Subtype of Mode for reverse-mode differentiation.
Type parameters
ReturnPrimal: whether to return the primal return value from the augmented-forward pass.
Holomorphic: whether the complex result function is holomorphic, in which case we should compute d/dz.
Other parameters: see Mode.
EnzymeCore.ReverseModeSplit — Type
struct ReverseModeSplit{
ReturnPrimal,
ReturnShadow,
Width,
RuntimeActivity,
StrongZero,
ModifiedBetween,
ABI,
ErrFuncIfWritten
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity,StrongZero}
Subtype of Mode for split reverse-mode differentiation, to use in autodiff_thunk and variants.
Type parameters
ReturnShadow: whether to return the shadow return value from the augmented-forward pass.
Width: batch size (pick 0 to derive it automatically).
ModifiedBetween: Tuple of each argument's "modified between" state (pick true to derive it automatically).
Other parameters: see ReverseMode.
EnzymeCore.Forward — Constant
const Forward
Default instance of ForwardMode that doesn't return the primal.
EnzymeCore.ForwardWithPrimal — Constant
const ForwardWithPrimal
Default instance of ForwardMode that also returns the primal.
EnzymeCore.Reverse — Constant
const Reverse
Default instance of ReverseMode that doesn't return the primal.
EnzymeCore.ReverseHolomorphic — Constant
const ReverseHolomorphic
Holomorphic instance of ReverseMode that doesn't return the primal.
EnzymeCore.ReverseHolomorphicWithPrimal — Constant
const ReverseHolomorphicWithPrimal
Holomorphic instance of ReverseMode that also returns the primal.
EnzymeCore.ReverseSplitNoPrimal — Constant
const ReverseSplitNoPrimal
Default instance of ReverseModeSplit that doesn't return the primal.
EnzymeCore.ReverseSplitWithPrimal — Constant
const ReverseSplitWithPrimal
Default instance of ReverseModeSplit that also returns the primal.
EnzymeCore.ReverseWithPrimal — Constant
const ReverseWithPrimal
Default instance of ReverseMode that also returns the primal.
EnzymeCore.Combined — Method
Combined(::ReverseMode)
Turn a ReverseModeSplit object into a ReverseMode object while preserving as many of the settings as possible.
This function acts as the identity on a ReverseMode.
See also Split.
EnzymeCore.NoPrimal — Method
NoPrimal(::Mode)
Return a new mode which excludes the primal value.
EnzymeCore.ReverseSplitModified — Method
ReverseSplitModified(::ReverseModeSplit, ::Val{MB})
Return a new instance of ReverseModeSplit mode where ModifiedBetween is set to MB.
EnzymeCore.ReverseSplitWidth — Method
ReverseSplitWidth(::ReverseModeSplit, ::Val{W})
Return a new instance of ReverseModeSplit mode where Width is set to W.
EnzymeCore.Split — Method
Split(
::ReverseMode, [::Val{ReturnShadow}, ::Val{Width}, ::Val{ModifiedBetween}, ::Val{ShadowInit}]
)
Turn a ReverseMode object into a ReverseModeSplit object while preserving as many of the settings as possible. The rest of the settings can be configured with optional positional arguments of Val type.
This function acts as the identity on a ReverseModeSplit.
See also Combined.
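The mode-conversion helpers compose predictably; a quick sketch (this assumes the helpers are accessible unqualified, e.g. via using EnzymeCore — otherwise qualify them):

```julia
using EnzymeCore

# Add / remove the primal from a mode:
EnzymeCore.WithPrimal(Reverse)          # equivalent to ReverseWithPrimal
EnzymeCore.NoPrimal(ReverseWithPrimal)  # equivalent to Reverse

# Round-trip between combined and split reverse modes:
split = EnzymeCore.Split(Reverse)  # a ReverseModeSplit preserving Reverse's settings
EnzymeCore.Combined(split)         # back to a combined ReverseMode
```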
EnzymeCore.WithPrimal — Method
WithPrimal(::Mode)
Return a new mode which includes the primal value.
EnzymeCore.clear_err_if_func_written — Function
clear_err_if_func_written(::Mode)
Return a new mode which doesn't throw an error for attempts to write into an unannotated function object.
EnzymeCore.clear_runtime_activity — Function
clear_runtime_activity(::Mode)
Return a new mode where runtime activity analysis is deactivated. See Enzyme.Mode for more information on runtime activity.
EnzymeCore.clear_strong_zero — Function
clear_strong_zero(::Mode)
Return a new mode where strong zero is deactivated. See Enzyme.Mode for more information on strong zero.
EnzymeCore.compiler_job_from_backend — Function
compiler_job_from_backend(::KernelAbstractions.Backend, F::Type, TT::Type)::GPUCompiler.CompilerJob
Returns a GPUCompiler CompilerJob from a backend, as specified by the first argument to the function.
For example, in CUDA one would do:
function EnzymeCore.compiler_job_from_backend(::CUDABackend, @nospecialize(F::Type), @nospecialize(TT::Type))
mi = GPUCompiler.methodinstance(F, TT)
return GPUCompiler.CompilerJob(mi, CUDA.compiler_config(CUDA.device()))
end
EnzymeCore.ignore_derivatives — Method
ignore_derivatives(x::T)::T
Behaves like the identity function, but disconnects the "shadow" associated with x. This has the effect of preventing any derivatives from being propagated through x.
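A minimal sketch of the effect: the second use of x below carries no shadow, so only the first term contributes to the derivative.

```julia
using Enzyme
import EnzymeCore

g(x) = x + EnzymeCore.ignore_derivatives(x)

# d/dx (x + ignore_derivatives(x)) = 1, since the second term's
# shadow is disconnected.
autodiff(Forward, g, Duplicated(3.0, 1.0))  # derivative is 1.0, not 2.0
```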
EnzymeCore.make_zero — Function
make_zero(::Type{T}, seen::IdDict, prev::T, ::Val{copy_if_inactive}=Val(false))::T
Recursively make a zeroed copy of the value prev of type T. The argument copy_if_inactive specifies what to do if the type T is guaranteed to be inactive: use the primal (the default) or still copy the value.
EnzymeCore.make_zero! — Function
make_zero!(val::T, seen::IdSet{Any}=IdSet())::Nothing
Recursively set a variable's differentiable fields to zero.
EnzymeCore.make_zero — Method
make_zero(prev::T)
Helper function to recursively make zero.
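A brief sketch of the recursive behavior (inactive data such as strings is passed through unchanged):

```julia
using EnzymeCore

nt = (a = 1.5, b = [2.0, 3.0], c = "str")
EnzymeCore.make_zero(nt)   # (a = 0.0, b = [0.0, 0.0], c = "str")

v = [1.0, 2.0]
EnzymeCore.make_zero!(v)   # v is mutated to [0.0, 0.0]
```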
EnzymeCore.needs_primal — Method
needs_primal(::Mode)
needs_primal(::Type{Mode})
Returns true if the mode needs the primal value, otherwise false.
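For instance, with the default mode instances:

```julia
using EnzymeCore

EnzymeCore.needs_primal(Reverse)            # false
EnzymeCore.needs_primal(ReverseWithPrimal)  # true
EnzymeCore.needs_primal(ForwardWithPrimal)  # true
```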
EnzymeCore.remake_zero! — Function
remake_zero!(val::T, seen::IdSet{Any}=IdSet())::Nothing
Recursively set a variable's differentiable fields to zero.
EnzymeCore.runtime_activity — Method
runtime_activity(::Mode)
runtime_activity(::Type{<:Mode})
Returns whether the given mode has runtime activity set. For a deeper explanation of what runtime activity is, see the FAQ.
EnzymeCore.set_abi — Function
set_abi(::Mode, ::Type{ABI})
Return a new mode with its ABI set to the chosen type.
EnzymeCore.set_err_if_func_written — Function
set_err_if_func_written(::Mode)
Return a new mode which throws an error for any attempt to write into an unannotated function object.
EnzymeCore.set_runtime_activity — Function
set_runtime_activity(::Mode)
set_runtime_activity(::Mode, activity::Bool)
set_runtime_activity(::Mode, config::Union{FwdConfig,RevConfig})
set_runtime_activity(::Mode, prev::Mode)
Return a new mode where runtime activity analysis is activated / set to the desired value. See the FAQ for more information.
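These setters pair with the runtime_activity getter; a quick sketch:

```julia
using EnzymeCore

mode = EnzymeCore.set_runtime_activity(Reverse)
EnzymeCore.runtime_activity(mode)     # true
EnzymeCore.runtime_activity(Reverse)  # false

# Deactivate it again:
EnzymeCore.runtime_activity(EnzymeCore.clear_runtime_activity(mode))  # false
```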
EnzymeCore.set_strong_zero — Function
set_strong_zero(::Mode)
set_strong_zero(::Mode, activity::Bool)
set_strong_zero(::Mode, config::Union{FwdConfig,RevConfig})
set_strong_zero(::Mode, prev::Mode)
Return a new mode where strong zero is activated / set to the desired value. See the FAQ for more information.
EnzymeCore.strong_zero — Method
strong_zero(::Mode)
strong_zero(::Type{<:Mode})
Returns whether the given mode has strong zero set. For a deeper explanation of what strong zero is, see the FAQ.
EnzymeCore.within_autodiff — Method
within_autodiff()
Returns true if within autodiff, otherwise false.
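A sketch: the branch below only squares its input when evaluated under autodiff.

```julia
using Enzyme

f(x) = Enzyme.within_autodiff() ? x^2 : x

Enzyme.within_autodiff()  # false at top level

# Under autodiff the x^2 branch runs, so the derivative at 3.0 is 2 * 3.0 = 6.0.
autodiff(Forward, f, Duplicated(3.0, 1.0))
```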
EnzymeCore.EnzymeRules.AugmentedReturn — Type
AugmentedReturn(primal, shadow, tape)
Augment the primal return value of a function with its shadow, as well as any additional information needed to correctly compute the reverse pass, stored in tape.
Unless the config specifies that a variable is not overwritten, rules must assume that any arrays/data structures/etc. are overwritten between the forward and the reverse pass. Any floats or variables passed by value are always preserved as-is (as are the arrays themselves, just not necessarily the values in the array).
See also augmented_primal.
EnzymeCore.EnzymeRules.FwdConfig — TypeFwdConfig{NeedsPrimal, NeedsShadow, Width, RuntimeActivity, StrongZero}
FwdConfigWidth{Width} = FwdConfig{<:Any, <:Any, Width}Configuration type to dispatch on in custom forward rules (see forward).
- NeedsPrimal and NeedsShadow: boolean values specifying whether the primal and shadow (resp.) should be returned.
- Width: an integer that specifies the number of adjoints/shadows simultaneously being propagated.
- RuntimeActivity: whether runtime activity is enabled. See the FAQ for more information.
- StrongZero: whether strong zero is enabled. See the FAQ for more information.
Getters for the type parameters are provided by needs_primal, needs_shadow, width, runtime_activity, and strong_zero.
EnzymeCore.EnzymeRules.RevConfig — TypeRevConfig{NeedsPrimal, NeedsShadow, Width, Overwritten, RuntimeActivity, StrongZero}
RevConfigWidth{Width} = RevConfig{<:Any, <:Any, Width}Configuration type to dispatch on in custom reverse rules (see augmented_primal and reverse).
- NeedsPrimal and NeedsShadow: boolean values specifying whether the primal and shadow (resp.) should be returned.
- Width: an integer that specifies the number of adjoints/shadows simultaneously being propagated.
- Overwritten: a tuple of booleans indicating whether each argument (including the function itself) is potentially modified between the forward and reverse pass.
- RuntimeActivity: whether runtime activity is enabled. See the FAQ for more information.
- StrongZero: whether strong zero is enabled. See the FAQ for more information.
Getters for the type parameters are provided by needs_primal, needs_shadow, width, overwritten, runtime_activity, and strong_zero.
EnzymeCore.EnzymeRules.@easy_rule — Macro@easy_rule(f(x₁::T1, x₂::T2, ...),
@setup(statement₁, statement₂, ...),
(∂f₁_∂x₁, ∂f₁_∂x₂, ...),
(∂f₂_∂x₁, ∂f₂_∂x₂, ...),
...)A convenience macro that generates simple forward and reverse Enzyme rules using the provided partial derivatives.
The easy_rule macro assumes that the function f does not mutate any of its arguments, does not read from global data, and that no output aliases any other output or input.
The easy_rule macro only works if inputs are scalars or arrays of scalars, and the result is a scalar, array, or tuple thereof.
For each output result (a single output is assumed if a scalar is returned), a tuple of partial derivatives is expected. Specifically, each tuple contains one entry for each argument to f. This entry should contain the Jacobian ∂fi_∂xj, where i is the number of the output and j is the number of the input. If both output i and input j are scalars, ∂fi_∂xj should be a scalar. If at least one of them is an AbstractArray, ∂fi_∂xj should be a tensor of size (size(output[i])..., size(input[j])...), where scalars are considered zero-dimensional.
The arguments to f can either have no type constraints, or specific type constraints.
The result of f(x₁, x₂, ...) is automatically bound to Ω. This allows the primal result to be referenced (as Ω) within the derivative/setup expressions.
At present this does not support defining for closures/functors.
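As a minimal sketch (myscale is a hypothetical two-argument scalar function), a rule with a single output and no setup code might look like:

```julia
using EnzymeCore
using EnzymeCore.EnzymeRules: @easy_rule

# Hypothetical function: f(a, x) = a * x.
myscale(a, x) = a * x

# One output, two inputs → a single tuple with one partial per
# argument: ∂f/∂a = x and ∂f/∂x = a.
@easy_rule(myscale(a::Real, x::Real),
           @setup(nothing),
           (x, a))
```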
The @setup argument can be elided if no setup code is needed. In other words:
@easy_rule(f(x₁, x₂, ...),
(∂f₁_∂x₁, ∂f₁_∂x₂, ...),
(∂f₂_∂x₁, ∂f₂_∂x₂, ...),
...)is equivalent to:
@easy_rule(f(x₁, x₂, ...),
@setup(nothing),
(∂f₁_∂x₁, ∂f₁_∂x₂, ...),
(∂f₂_∂x₁, ∂f₂_∂x₂, ...),
...)If a specific argument has no partial derivative, then all corresponding argument values can instead be marked @Constant. For example, consider the case where config has no derivative.
@easy_rule(f(config, x, ...),
@setup(nothing),
(@Constant, ∂f₁_∂x, ...),
(@Constant, ∂f₂_∂x, ...),
...)EnzymeCore.EnzymeRules._constrain_and_name — Method_constrain_and_name(arg::Expr, _)Internal function.
Turn both a and ::constraint into a::Annotation{<:constraint} etc
EnzymeCore.EnzymeRules._just_name — Method_just_name(arg::Expr, _)Internal function.
Extract a from a::constraint.
EnzymeCore.EnzymeRules._normalize_scalarrules_macro_input — Method_normalize_scalarrules_macro_input(call, maybe_setup, partials)Internal function.
returns (in order) the correctly escaped:
- call: without any type constraints
- setup_stmts: the content of @setup, or [] if that is not provided
- inputs: with all args having the constraints removed from call, or defaulting to Number
- partials: which are all Expr{:tuple,...}
EnzymeCore.EnzymeRules._unconstrain — Method_unconstrain(a)Internal function.
Turn both a and a::S into a
EnzymeCore.EnzymeRules.augmented_primal — Functionaugmented_primal(::RevConfig, func::Annotation{typeof(f)}, RT::Type{<:Annotation}, args::Annotation...)Must return an AugmentedReturn type.
- The primal must be the same type as the original return if needs_primal(config), otherwise nothing.
- The shadow must be nothing if needs_shadow(config) is false. If width is 1, the shadow should be the same type as the original return. If the width is greater than 1, the shadow should be an NTuple of width values of the original return type.
- The tape can be any type (including Nothing) and is preserved for the reverse call.
EnzymeCore.EnzymeRules.forward — Functionforward(fwdconfig, func::Annotation{typeof(f)}, RT::Type{<:Annotation}, args::Annotation...)Calculate the forward derivative. The first argument is a FwdConfig object describing parameters of the differentiation. The second argument func is the callable to which the rule applies, either wrapped in a Const, or in a Duplicated if it is a closure. The third argument is the return type annotation, and all other arguments are the annotated function arguments.
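A minimal sketch of a forward rule (myf is a hypothetical function; the rule deliberately scales the true tangent by 1000 so its use is observable):

```julia
using Enzyme
import EnzymeCore.EnzymeRules: forward, FwdConfig

myf(x) = x^2

# Custom forward rule: return the primal together with an exaggerated
# derivative (1000 × the true tangent 2x·dx) so we can tell the rule
# was actually invoked.
function forward(config::FwdConfig, func::Const{typeof(myf)},
                 ::Type{<:Duplicated}, x::Duplicated)
    return Duplicated(func.val(x.val), 1000 * 2 * x.val * x.dval)
end

autodiff(ForwardWithPrimal, myf, Duplicated, Duplicated(3.0, 1.0))
```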
EnzymeCore.EnzymeRules.inactive — Functioninactive(func::typeof(f), args...)Mark a particular function as always being inactive in both its return result and the function call itself.
EnzymeCore.EnzymeRules.inactive_noinl — Functioninactive_noinl(func::typeof(f), args...)Mark a particular function as always being inactive in both its return result and the function call itself, but do not prevent inlining of the function.
EnzymeCore.EnzymeRules.inactive_type — Methodinactive_type(::Type{Ty})Mark a particular type Ty as always being inactive.
EnzymeCore.EnzymeRules.multiply_fwd_into — Functionmultiply_fwd_into(prev, partial, dx)Internal function.
Multiply a partial derivative (df/dx) by its shadow input (dx) to form df.
Specifically, perform prev + partial * dx, returning the result or re-using prev's memory, where applicable.
EnzymeCore.EnzymeRules.multiply_rev_into — Functionmultiply_rev_into(prev, partial, df)Internal function.
Multiply a partial derivative (df/dx) by its shadow input (df) to form dx.
Specifically, perform prev + conj(partial * conj(dx)), returning the result or re-using prev's memory, where applicable.
EnzymeCore.EnzymeRules.needs_shadow — Methodneeds_shadow(::FwdConfig)
needs_shadow(::RevConfig)
needs_shadow(::Type{<:FwdConfig})
needs_shadow(::Type{<:RevConfig})Whether a custom rule should return the shadow (derivative) of the function result.
EnzymeCore.EnzymeRules.noalias — Functionnoalias(func::typeof(f), args...)Mark a particular function as always being a fresh allocation which does not alias any other accessible memory.
EnzymeCore.EnzymeRules.overwritten — Methodoverwritten(::RevConfig)
overwritten(::Type{<:RevConfig})A tuple of booleans for each argument (including the function itself), indicating if it is modified between the forward and reverse pass (true if potentially modified between).
EnzymeCore.EnzymeRules.primal_type — Methodprimal_type(::FwdConfig, ::Type{<:Annotation{RT}})
primal_type(::RevConfig, ::Type{<:Annotation{RT}})Compute the expected primal return type given a forward or reverse mode config and the return activity.
EnzymeCore.EnzymeRules.reverse — Functionreverse(::RevConfig, func::Annotation{typeof(f)}, dret::Active, tape, args::Annotation...)
reverse(::RevConfig, func::Annotation{typeof(f)}, ::Type{<:Annotation}, tape, args::Annotation...)Takes the gradient of the return value, the activity annotation, and the tape. If the return is active, dret is passed as Active{T} carrying the derivative of the active return value. Otherwise dret is passed as Type{Duplicated{T}}, etc.
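Together with augmented_primal, a reverse rule for a hypothetical scalar function might be sketched as follows (again exaggerating the derivative by 1000 to make the rule's use visible):

```julia
using Enzyme
import EnzymeCore.EnzymeRules: augmented_primal, reverse,
                               AugmentedReturn, RevConfig, needs_primal

myg(x) = x^2

function augmented_primal(config::RevConfig, func::Const{typeof(myg)},
                          ::Type{<:Active}, x::Active)
    # x is a scalar passed by value, so nothing needs to be taped;
    # an Active return needs no shadow.
    primal = needs_primal(config) ? func.val(x.val) : nothing
    return AugmentedReturn(primal, nothing, nothing)
end

function reverse(config::RevConfig, func::Const{typeof(myg)},
                 dret::Active, tape, x::Active)
    # Return a tuple with one entry per argument: d(x^2)/dx = 2x,
    # exaggerated by 1000 so we can tell the rule fired.
    return (1000 * 2 * x.val * dret.val,)
end

autodiff(Reverse, myg, Active, Active(3.0))
```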
EnzymeCore.EnzymeRules.shadow_type — Methodshadow_type(::FwdConfig, ::Type{<:Annotation{RT}})
shadow_type(::RevConfig, ::Type{<:Annotation{RT}})Compute the expected shadow return type given a forward or reverse mode config and the return activity.
EnzymeCore.EnzymeRules.uses_symbol — Methoduses_symbol(a, b::Symbol)Internal function.
Checks if a contains a use of the symbol b.
EnzymeCore.EnzymeRules.width — Methodwidth(::FwdConfig)
width(::RevConfig)
width(::Type{<:FwdConfig})
width(::Type{<:RevConfig})Get the size of a batch
EnzymeCore.needs_primal — Methodneeds_primal(::FwdConfig)
needs_primal(::RevConfig)
needs_primal(::Type{<:FwdConfig})
needs_primal(::Type{<:RevConfig})Whether a custom rule should return the original result of the function.
EnzymeTestUtils.ExprAndMsg — TypeA cunning hack to carry extra message along with the original expression in a test
EnzymeTestUtils.@test_msg — Macro@test_msg msg condition kws...This is per Test.@test condition kws... except that if it fails it also prints the msg. If msg == "" then this is just like @test, and nothing is printed.
Examples
julia> @test_msg "It is required that the total is under 10" sum(1:1000) < 10;
Test Failed at REPL[1]:1
Expression: sum(1:1000) < 10
Problem: It is required that the total is under 10
Evaluated: 500500 < 10
ERROR: There was an error during testing
julia> @test_msg "It is required that the total is under 10" error("not working at all");
Error During Test at REPL[2]:1
Test threw exception
Expression: error("not working at all")
Problem: It is required that the total is under 10
"not working at all"
Stacktrace:
julia> a = "";
julia> @test_msg a sum(1:1000) < 10;
Test Failed at REPL[153]:1
Expression: sum(1:1000) < 10
Evaluated: 500500 < 10
ERROR: There was an error during testingEnzymeTestUtils.are_activities_compatible — Methodare_activities_compatible(Tret, activities...) -> BoolReturn true if return activity type Tret and activity types activities are compatible.
EnzymeTestUtils.test_forward — Methodtest_forward(f, Activity, args...; kwargs...)Test Enzyme.autodiff of f in Forward-mode against finite differences.
f has all constraints of the same argument passed to Enzyme.autodiff, with additional constraints:
- If it mutates one of its arguments, it must return that argument.
Arguments
- Activity: the activity of the return value of f.
- args: Each entry is either an argument to f, an activity type accepted by autodiff, or a tuple of the form (arg, Activity), where Activity is the activity type of arg. If the activity type specified requires a tangent, a random tangent will be automatically generated.
Keywords
- rng::AbstractRNG: The random number generator to use for generating random tangents.
- fdm=FiniteDifferences.central_fdm(5, 1): The finite differences method to use.
- fkwargs: Keyword arguments to pass to f.
- rtol: Relative tolerance for isapprox.
- atol: Absolute tolerance for isapprox.
- testset_name: Name to use for a testset in which all tests are evaluated.
Examples
Here we test a rule for a function of scalars. Because we don't provide an activity annotation for y, it is assumed to be Const.
using Enzyme, EnzymeTestUtils
x, y = randn(2)
for Tret in (Const, Duplicated, DuplicatedNoNeed), Tx in (Const, Duplicated)
test_forward(*, Tret, (x, Tx), y)
endHere we test a rule for a function of an array in batch forward-mode:
x = randn(3)
y = randn()
for Tret in (Const, BatchDuplicated, BatchDuplicatedNoNeed),
Tx in (Const, BatchDuplicated),
Ty in (Const, BatchDuplicated)
test_forward(*, Tret, (x, Tx), (y, Ty))
endEnzymeTestUtils.test_reverse — Methodtest_reverse(f, Activity, args...; kwargs...)Test Enzyme.autodiff_thunk of f in ReverseSplitWithPrimal-mode against finite differences.
f has all constraints of the same argument passed to Enzyme.autodiff_thunk, with additional constraints:
- If an Array{<:AbstractFloat} appears in the input/output, then a reshaped version of it may not also appear in the input/output.
Arguments
- Activity: the activity of the return value of f.
- args: Each entry is either an argument to f, an activity type accepted by autodiff, or a tuple of the form (arg, Activity), where Activity is the activity type of arg. If the activity type specified requires a shadow, one will be automatically generated.
Keywords
- rng::AbstractRNG: The random number generator to use for generating random tangents.
- fdm=FiniteDifferences.central_fdm(5, 1): The finite differences method to use.
- fkwargs: Keyword arguments to pass to f.
- rtol: Relative tolerance for isapprox.
- atol: Absolute tolerance for isapprox.
- testset_name: Name to use for a testset in which all tests are evaluated.
- output_tangent: Optional final tangent to provide at the beginning of the reverse-mode differentiation.
Examples
Here we test a rule for a function of scalars. Because we don't provide an activity annotation for y, it is assumed to be Const.
using Enzyme, EnzymeTestUtils
x = randn()
y = randn()
for Tret in (Const, Active), Tx in (Const, Active)
test_reverse(*, Tret, (x, Tx), y)
endHere we test a rule for a function of an array in batch reverse-mode:
x = randn(3)
for Tret in (Const, Active), Tx in (Const, BatchDuplicated)
test_reverse(prod, Tret, (x, Tx))
endEnzyme.API.fast_math! — Methodfast_math!(val::Bool)Whether generated derivatives have fast math on or off, default on.
Enzyme.API.inlineall! — Methodinlineall!(val::Bool)Whether to inline all (non-recursive) functions generated by Julia within a single compilation unit. This may improve Enzyme's ability to successfully differentiate code and improve performance of the original and generated derivative program. It often, however, comes with an increase in compile time. This is off by default.
Enzyme.API.instname! — Methodinstname!(val::Bool)Whether to add a name to all LLVM values. This may be helpful for debugging generated programs, both primal and derivative. Off by default.
Enzyme.API.looseTypeAnalysis! — MethodlooseTypeAnalysis!(val::Bool)Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. For example, a copy of Float32's requires a different derivative than a memcpy of Float64's, Ptr's, etc. In some cases Enzyme may not be able to deduce all the types necessary and throw an unknown type error. If this is the case, open an issue. One can silence these issues by setting looseTypeAnalysis!(true) which tells Enzyme to make its best guess. This will remove the error and allow differentiation to continue, however, it may produce incorrect results. Alternatively one can consider increasing the space of the evaluated type lattice which gives Enzyme more time to run a more thorough analysis through the use of maxtypeoffset!
Enzyme.API.maxtypedepth! — Methodmaxtypedepth!(val::Int)Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. To ensure this analysis terminates, it operates on a finite lattice of possible states. This function sets the maximum depth into a type that Enzyme will consider. A smaller value will cause type analysis to run faster, but may result in some necessary types not being found and result in unknown type errors. A larger value may result in unknown type errors being resolved by searching a larger space, but may run longer. The default setting is 6.
Enzyme.API.maxtypeoffset! — Methodmaxtypeoffset!(val::Int)Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. To ensure this analysis terminates, it operates on a finite lattice of possible states. This function sets the maximum offset into a type that Enzyme will consider. A smaller value will cause type analysis to run faster, but may result in some necessary types not being found and result in unknown type errors. A larger value may result in unknown type errors being resolved by searching a larger space, but may run longer. The default setting is 512.
Enzyme.API.memmove_warning! — Methodmemmove_warning!(val::Bool)Whether to issue a warning when differentiating memmove. Off by default.
Enzyme.API.printactivity! — Methodprintactivity!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) a log of all decisions made during Activity Analysis (the analysis which determines what values/instructions are differentiated). This may be useful for debugging MixedActivity errors, correctness errors, and performance errors. Off by default.
Enzyme.API.printall! — Methodprintall!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) the LLVM function being differentiated, as well as all generated derivatives immediately after running Enzyme (but prior to any other optimizations). Off by default.
Enzyme.API.printdiffuse! — Methodprintdiffuse!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) information about each LLVM value – specifically whether it and its shadow are required for computing the derivative. In contrast to printunnecessary!, this flag prints the debug log of the analysis which determines, for each value and shadow value, whether it can find a user which would require it to be kept around (rather than being deleted). This is prior to any cache optimizations and is a debug log of Differential Use Analysis. This may be helpful for debugging caching, phi node deletion, performance, and other errors. Off by default.
Enzyme.API.printperf! — Methodprintperf!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) performance information about generated derivative programs. It will provide debug information that warns why particular values are cached for the reverse pass, and thus require additional computation/storage. This is particularly helpful for debugging derivatives which OOM or otherwise run slow. Off by default.
Enzyme.API.printtype! — Methodprinttype!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) a log of all decisions made during Type Analysis (the analysis by which Enzyme determines the type of all values in the program). This may be useful for debugging correctness errors, illegal type analysis errors, insufficient type information errors, and performance errors. Off by default.
Enzyme.API.printunnecessary! — Methodprintunnecessary!(val::Bool)A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) information about each LLVM value – specifically whether it and its shadow are required for computing the derivative. In contrast to printdiffuse!, this flag prints the final results after running cache optimizations such as minCut (see Recompute vs Cache Heuristics from this paper, and slides 31-33 from this presentation, for a description of the caching algorithm). This may be helpful for debugging caching, phi node deletion, performance, and other errors. Off by default.
Enzyme.API.strictAliasing! — MethodstrictAliasing!(val::Bool)Whether Enzyme's type analysis will assume strict aliasing semantics. When strict aliasing semantics are on (the default), Enzyme can propagate type information up through conditional branches. This may lead to illegal type errors when analyzing code with unions. Disabling strict aliasing will enable these union types to be correctly analyzed. However, it may lead to errors where sufficient type information cannot be deduced. One can turn these insufficient type information errors into warnings by calling looseTypeAnalysis!(true), which tells Enzyme to use its best guess in such scenarios.
Enzyme.API.typeWarning! — MethodtypeWarning!(val::Bool)Whether to print a warning when Type Analysis learns information about a value's type which cannot be represented in the current size of the lattice. See maxtypeoffset! for more information. Off by default.
Enzyme.Compiler.CheckNan — ConstantCheckNan::Ref{Bool}If Enzyme.Compiler.CheckNan[] == true, Enzyme will error at the first encounter of a NaN during differentiation. Useful as a debugging tool to help locate the call whose derivative is the source of unexpected NaNs. Off by default.
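For example, the flag can be flipped before differentiating suspect code (a minimal sketch):

```julia
using Enzyme

# Opt in to erroring at the first NaN produced during differentiation,
# to locate the call whose derivative introduces the NaN.
Enzyme.Compiler.CheckNan[] = true
```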