Explicitly skip cudnn_rnn/miopen_rnn tests#3113

Open
Silv3S wants to merge 1 commit into intel:main from Silv3S:skip_cudnn_rnn

Conversation


@Silv3S Silv3S commented Mar 20, 2026

Fixes #2472
As mentioned in the original issue, there is no plan to support this CUDA-specific ATen operator on the XPU backend.

@Silv3S added the following labels on Mar 20, 2026:

  • disable_e2e: Disable all e2e test jobs for the PR
  • disable_distributed: Disable distributed UT test jobs for the PR
  • disable_win: Disable Windows CI test jobs for the PR
  • disable_accelerate: Disable accelerate test job in PR CI testing
  • disable_transformers: Disable transformers UT test in PR CI
Copilot AI left a comment

Pull request overview

Updates the XPU test skip list to explicitly skip fake-tensor RNN tests tied to CUDA-specific ATen operators that aren’t supported on the XPU backend (per issue #2472).

Changes:

  • Adds targeted skips under test_fake_tensor_xpu.py for cuDNN/miopen RNN-related coverage.


Comment on lines +95 to +99
"test_fake_tensor_xpu.py": (
# https://github.com/intel/torch-xpu-ops/issues/2472
# aten::_cudnn_rnn/aten::miopen_rnn not supported
"test_cudnn_rnn",
),

Copilot AI Mar 20, 2026


The PR title/comment says this change explicitly skips both cudnn_rnn and miopen_rnn tests, but the skip tuple currently only includes "test_cudnn_rnn". Either add the corresponding miopen_rnn test(s) here (if they exist in test_fake_tensor_xpu.py), or update the PR title/comment to match what is actually being skipped.
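For context on what the review comment is asking, the diff above extends a skip list that maps a test file name to a tuple of test-name patterns. A minimal sketch of how such a skip list is typically matched (the `should_skip` helper and the `"test_miopen_rnn"` entry are assumptions for illustration, not names confirmed by this PR; check test_fake_tensor_xpu.py for the actual test names):

```python
# Sketch of a pattern-based skip list, as used in the diff above.
# Assumption: patterns are matched as substrings of the test name.
skip_dict = {
    "test_fake_tensor_xpu.py": (
        # https://github.com/intel/torch-xpu-ops/issues/2472
        # aten::_cudnn_rnn / aten::miopen_rnn not supported on XPU
        "test_cudnn_rnn",
        "test_miopen_rnn",  # hypothetical entry; add only if such tests exist
    ),
}

def should_skip(test_file: str, test_name: str) -> bool:
    """Return True if any skip pattern for test_file occurs in test_name."""
    return any(pattern in test_name for pattern in skip_dict.get(test_file, ()))

# Substring matching means one entry covers every variant of the test.
print(should_skip("test_fake_tensor_xpu.py", "test_cudnn_rnn_backward"))  # True
print(should_skip("test_fake_tensor_xpu.py", "test_lstm_cpu"))            # False
```

This is why a single `"test_cudnn_rnn"` entry can skip several related tests, and also why a separate miopen_rnn pattern would be needed if any miopen-named tests exist in the file.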


Labels

  • disable_accelerate: Disable accelerate test job in PR CI testing
  • disable_distributed: Disable distributed UT test jobs for the PR
  • disable_e2e: Disable all e2e test jobs for the PR
  • disable_transformers: Disable transformers UT test in PR CI
  • disable_win: Disable Windows CI test jobs for the PR

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[upstream_ut] NotImplementedError: The operator 'aten::_cudnn_rnn' is not currently implemented for the XPU device

2 participants