I have an implicit function that is extremely expensive to evaluate. However, it is fully GPU-parallelized, in the sense that if a matrix is provided, each row is evaluated in parallel. For this particular function, the output is a matrix of size `(something, number of rows)`.
Doing finite differences through this is then easily parallelizable: each row can be a finite-difference "tangent" (sorry if I'm misusing that word), so the full Jacobian can be constructed in one single function evaluation as opposed to many evaluations.
I am doing this now manually, but it would be nice if this functionality could be generalized into FiniteDiff. Perhaps a `dims` argument could be provided to `AutoFiniteDiff` specifying the dimension along which the evaluation is vectorized (in my case across the rows, so `dims=1`).
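For reference, here is a minimal sketch of the kind of manual batching I mean, written in Python/NumPy rather than Julia for illustration. The helper name `batched_jacobian` and the fixed step `h` are my own illustrative choices, not anything from FiniteDiff:

```python
import numpy as np

def batched_jacobian(f_batched, x, h=1e-6):
    """Forward-difference Jacobian via one batched call.

    f_batched takes a matrix whose rows are input points and
    evaluates each row in parallel (e.g. on a GPU), returning
    an array of shape (output_dim, n_rows).
    """
    n = x.size
    # Stack the base point plus n perturbed copies:
    # row 0 is x, row i+1 is x + h * e_i.
    X = np.tile(x, (n + 1, 1))
    X[1:] += h * np.eye(n)
    # One batched evaluation replaces n + 1 separate calls.
    Y = f_batched(X)  # shape (output_dim, n + 1)
    # Column 0 is f(x); the rest give the difference quotients,
    # i.e. the columns of the Jacobian.
    return (Y[:, 1:] - Y[:, [0]]) / h
```

For example, with `f(x) = [x0**2, x0*x1]` exposed in batched form as `lambda X: np.stack([X[:, 0]**2, X[:, 0] * X[:, 1]])`, a single call to `batched_jacobian` at `x = [1.0, 2.0]` recovers the full 2x2 Jacobian.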