Use single native dispatch adapter for custom gradients
Replace the native per-op-type unordered_map of TFJ_GradFuncAdapter
pointers with a single global dispatch adapter.
The native layer now registers CustomGradFunc per op type in the
GradOpRegistry, but always calls the same TFJ_GradFuncAdapter instance.
All opType-based routing is handled on the Java side by
DispatchingGradientAdapter.
This aligns the native implementation with the intended design:
there is only one native function pointer registered, and dispatch
logic lives entirely in Java.
Also fix the unsafe cast of Scope* to TFJ_Scope* by constructing a
temporary TFJ_Scope wrapper instead.
Changed file: tensorflow-core/tensorflow-core-native/src/main/native/org/tensorflow/internal/c_api/tfj_gradients_impl.cc (21 additions, 15 deletions)